Don’t Count on Me

This post is in support of Tim Ford’s (b|t) #iwanttohelp challenge. It’s also written because this has burned me twice in the past 3 months, and by blogging about it, hopefully it’ll stick in my mind.

Setup

I’ve recently been doing a bunch of work with stored procedures, trying to improve performance that’s been suffering due to suboptimal queries. Some of this tuning has resulted in creating temporary tables. After making my changes and running the procedures in SSMS, everything looked good – data’s correct, performance metrics are all improved. Everyone wins!

Then I checked the web app. At first, it appeared to work OK. But on reloading the page with different parameters, I got no data, or the data from the previous parameters, or other data that was completely out of left field. Not good!

After a bit of head-scratching, I popped over to the SQL Server Slack and asked for ideas about why I’d be getting different results depending on how the procedure was called. After kicking a few ideas around, someone asked if the procedure included SET NOCOUNT ON. It didn’t, so I added it and my problems were solved!

Explanation

So what happened here? When you execute a query against SQL Server, both your data and some additional information are sent back to the client. This additional information travels on a separate channel, accessible via the SqlConnection.InfoMessage event in ADO.NET (classic ADO’s Connection object exposes a similar InfoMessage event). When you run queries in SSMS, you see this information in the Messages tab of the results pane, most often as “X row(s) affected”.

That’s where my new stored procedures were causing problems. The original procedures returned only one message, corresponding to the number of records returned by the single query in each procedure. But now that I’m loading temp tables, I’m getting multiple messages back – at a minimum, a count of the records affected when loading the temp table, plus a count of the records returned to the calling application.

I’m not sure exactly what my application was doing with this, but as soon as multiple messages were passed back through InfoMessage, it got very confused and started doing unexpected things with the query results. I suspect that it saw multiple messages and attempted to use the data associated with the first one – of which there was none, because that first statement was just inserting into my temp table.

By starting the stored procedure with SET NOCOUNT ON, those row count messages are suppressed and that additional data isn’t transmitted to the client (PRINT output and error messages still come through). It’s also said that this can improve performance (although these days it’s really about reduced network traffic), but my primary interest in using it is to keep client applications that I can’t change from blowing up.

Something I find very interesting is that SSMS ships with two different templates for stored procedures and one does include SET NOCOUNT ON, while the other doesn’t.

Example

Here are three simple stored procedures to demonstrate the effect of this setting.

CREATE OR ALTER PROCEDURE dbo.GetCounties
AS
print 'GetCounties';
select s.name,c.countyname
from states s join counties c on s.StateId = c.StateId;
go

CREATE OR ALTER PROCEDURE dbo.GetCounties2
AS
create table #StatesCounties (
StateName nvarchar(100)
,CountyName nvarchar(100)
);
print 'GetCounties2';
insert into #StatesCounties
select s.name as StateName,c.countyname
from states s join counties c on s.StateId = c.StateId;
select StateName,CountyName from #StatesCounties;
go

CREATE OR ALTER PROCEDURE dbo.GetCounties3
AS
SET NOCOUNT ON
create table #StatesCounties (
StateName nvarchar(100)
,CountyName nvarchar(100)
);
print 'GetCounties3';
insert into #StatesCounties
select s.name as StateName,c.countyname
from states s join counties c on s.StateId = c.StateId;
select StateName,CountyName from #StatesCounties;

And the result of running each, from the SSMS Messages tab.

GetCounties

(122 row(s) affected)
GetCounties2

(122 row(s) affected)

(122 row(s) affected)
GetCounties3

Note how the first reports the number of rows returned, while the second reports both the number of rows inserted into the temp table and the number returned from the query. In the last example, no messages are returned. In all cases, the print messages are returned because they’re explicitly output by my code.

Summary

  • Unless you have a very specific need to get this alternate data stream in your calling application, use SET NOCOUNT ON in your stored procedures
  • The next time you’re working in a stored procedure, add it if it’s not already there
  • Add it to the template you use for creating new stored procedures

T-SQL Tuesday #83: Why Leave Well Enough Alone?

It’s 2016. So why are we still dealing with T-SQL code and design patterns that date back seven versions?

 

In the 15 years I have been using databases professionally, we’re still dealing with:

  • People’s names are split into first name, last name and middle initial fields. Ignoring that this falls afoul of several of the myths programmers believe about names, the first name column was defined as CHAR(10) in a standard installation. How many characters are in the name Christopher (hint: I had to take off a shoe to count them all)?
  • Other arbitrarily short column sizes which cause problems as the system scales out in usage. For example, an event ID field that’s 8 characters: 2 letters and a 6-digit number which is used as a sequence. Guess what happens when you hit the millionth event in that sequence.
  • Processes originally developed as transactions (for good reasons), but not designed in such a way that they scale to today’s demands.
  • NOLOCK hints everywhere. It’s even in newly-developed code for this application.
  • Cursors used any time a set of records has to be updated with a small bit of conditional logic built in. A set-based operation with an appropriate CASE expression would work much better (see the sketch just after this list).
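Here’s a minimal sketch of that set-based pattern (the table and column names are made up for illustration): instead of a cursor that walks the rows and decides what to do with each one, a single UPDATE carries the conditional logic in a CASE expression.

-- Hypothetical example: one set-based UPDATE replaces the row-by-row cursor.
UPDATE dbo.Orders
SET ShippingFee = CASE
        WHEN OrderTotal >= 100.00 THEN 0.00      -- free shipping on big orders
        WHEN ShipMethod = 'Expedited' THEN 25.00
        ELSE 10.00
    END
WHERE OrderDate >= '20160101';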

The primary system I deal with on a daily basis was originally developed as a DOS application and several of the above examples are drawn from it. Looking at the core tables and columns, it’s easy to identify those that began life in those early days – they all have 8-character names. Time moved on and the system grew and evolved. DOS to Windows. Windows to the web. But the database, and the practices and patterns used in the database, haven’t come along for the ride.

Data schema conversions can be hard and disruptive – you need to update your application and stored procedures, and provide customers/users with a clean migration path. Code changes require testing. Complexity and cost grow every time you introduce changes. I get that.

But by not keeping up with the advancements of the platform your data resides on and ignoring the evolution of how to work with your data, you do your customers, users, partners, colleagues and yourself a disservice.

How do you improve this? I’m certainly not advocating for scrapping everything and rewriting all of your code. Complete rewrites are generally a bad idea.

What I am saying, however, is:

  • You need to be constantly watching the state of the platforms your software runs on. If you drop support for a particular version (say, dropping SQL Server 2005 support as Microsoft no longer supports it), start evaluating the 2008+ features that are now open to you.
  • Drop support for old versions of SQL Server. Don’t let the past shackle your future.
  • Get outside opinions from trusted sources. Whether it be from your local user group, a short consulting engagement, or bringing in new people. But most importantly, when you seek advice, make use of it. Don’t ask for advice and then ignore it.
  • Don’t accept the status quo. Any time you’re working in a piece of code, review the whole thing. Can this section be cleaned up? Is it even needed anymore? Has the system scaled so much in usage or data volume that it needs to be rethought entirely? Have you learned something new from your favorite SQL Server blog or a SQL Saturday event that you can apply to it?

That last point is where everyone responsible for an application or database can make the most impact. To co-opt Baden-Powell’s last message to the Boy Scouts of the world: Leave the code a little better than you found it. If you do this every time you touch a component of your database, you’ll make enough incremental updates that these 15-year-old problems will go away and stay away.

Name Your Defaults So SQL Server Doesn’t

Something in SQL Server that isn’t always obvious to beginners is that when you create a default value for a column on a table, SQL Server creates a constraint (much like a primary or foreign key). All constraints must have a name, and if one isn’t specified, SQL Server will generate one for you. For example:

CREATE TABLE [dbo].[point_types] (
[typeid] [int] NOT NULL DEFAULT(NEXT VALUE FOR [pointtypeid])
,[typename] [nvarchar](30) NOT NULL DEFAULT 'Unspecified'
,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
)
GO

We’ve got a simple table here and both fields have a default value set (the primary key’s value is generated from a sequence object, pointtypeid). Time goes on, and a change in policy comes up which requires that I change the default value of typename to Unknown. To do this, I have to drop the constraint and re-create it. To find the name of the constraint, I can either ask sp_help, or run this query:

SELECT all_columns.name AS column_name
,default_constraints.name AS constraint_name
,default_constraints.definition AS constraint_definition
FROM sys.all_columns
INNER JOIN sys.tables
ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas
ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints
ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.name = 'dbo'
AND tables.name = 'point_types';

I’ve got my constraint name now, so I can drop it & re-create it.

[Screenshot: NameDefaults01]

ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT DF__point_typ__typen__21B6055D;
GO

ALTER TABLE [dbo].[point_types] ADD DEFAULT('Unknown')
FOR [typename];
GO

And if I re-run the above query, I can see that the constraint’s name is different.

[Screenshot: NameDefaults02]

This means that everywhere I need to change this constraint (development, test and production), I’ll need to figure out the constraint name in that particular database and drop it before re-creating it. This makes a deployment script a bit messier, as it needs more code to find those constraint names:

DECLARE @constraintname sysname; -- sysname: object names can be up to 128 characters

SELECT @constraintname = default_constraints.NAME
FROM sys.all_columns
INNER JOIN sys.tables
ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas
ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints
ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.NAME = 'dbo'
AND tables.NAME = 'point_types'

DECLARE @sql NVARCHAR(200) = N'alter table [dbo].[point_types] drop constraint ' + @constraintname;

PRINT @sql;

[Screenshot: NameDefaults03]

EXECUTE sp_executesql @sql;

ALTER TABLE [dbo].[point_types] ADD DEFAULT('Unknown')
FOR [typename];
GO

But this doesn’t really solve my problem; it just works around it. It’s still messy and fragile. If I need to do other operations on the default constraint, I need to go through the same exercise to find its name.

Fortunately, SQL Server lets us name default constraints just like any other constraint, and by doing so we avoid this trouble. By setting my own name for the constraint, I know what it’ll be in every database, without having to query system tables. The name can be set in both the CREATE TABLE statement and an independent ALTER TABLE.

CREATE TABLE [dbo].[point_types] (
[typeid] [int] NOT NULL DEFAULT(NEXT VALUE FOR [pointtypeid])
,[typename] [nvarchar](30) NOT NULL CONSTRAINT [DF_PT_TypeName] DEFAULT 'Unspecified'
,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
);
GO

ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT [DF_PT_TypeName];
GO

ALTER TABLE [dbo].[point_types] ADD CONSTRAINT [DF_PT_TypeName] DEFAULT('Unknown')
FOR [typename];
GO

I can also combine these in the next deployment that requires a change to the default constraint, dropping the system-generated name and establishing my own static name to make things simpler in the future.
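For reference, here’s a sketch of what that combined step could look like in a deployment script for the example table above – find whatever system-generated name the default currently has, drop it, and re-create the default with the static name:

DECLARE @sql nvarchar(400);

-- Find the current default constraint on typename, whatever it happens to be named.
SELECT @sql = N'ALTER TABLE [dbo].[point_types] DROP CONSTRAINT ' + QUOTENAME(default_constraints.name) + N';'
FROM sys.default_constraints
INNER JOIN sys.all_columns
ON default_constraints.parent_object_id = all_columns.object_id
AND default_constraints.parent_column_id = all_columns.column_id
WHERE default_constraints.parent_object_id = OBJECT_ID(N'dbo.point_types')
AND all_columns.name = N'typename';

IF @sql IS NOT NULL
    EXECUTE sp_executesql @sql;

-- Re-create it with a name we control; future deployments can reference it directly.
ALTER TABLE [dbo].[point_types] ADD CONSTRAINT [DF_PT_TypeName] DEFAULT('Unknown') FOR [typename];
GO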

Is explicitly naming default (or any other) constraints necessary? No, but doing so helps your database document itself, and it makes future deployment/promotion scripts simpler and less prone to breakage. SQL Server needs a name for the constraint regardless; it’s worth specifying it yourself.

SQL New Blogger Challenge November 2015 Edition – Week 3 Digest

This week’s #sqlnewblogger posts!

Author Post
@eleightondick [T-SQL Tuesday] Data modeling: The trouble with prefixes | The Data Files
@tomsql Adventures With TomSQL, aka Tom Staab
@EdDebug Automatically name primary key constraints in SSDT | the.agilesql.club
@rabryst Born SQL on Twitter: “Temporal Tables – Under the Covers with the Transaction Log”
@YatesSQL Community Involvement–Why Wait? | The SQL Professor
@cjsommer Identity Column Increment Value (EVEN/ODD) | cjsommer.com
@DBA_ANDY Nebraska SQL from @DBA_ANDY: CHECKDB – The database could not be exclusively locked to perform the operation
@ALevyInROC Selectively Locking Down Data – Gracefully – The Rest is Just Code
@eleightondick SQLNewBlogger, Week 3 | The Data Files
@tomsql Being Our Collective Best
@SQLMickey T-SQL Tuesday #72 Summary – Data Modeling Gone Wrong | Mickey’s T-SQL Ponderings

Selectively Locking Down Data – Gracefully

I have a situation where I need to retrieve the data in an encrypted column, but I don’t want to give all my users access to the symmetric key used to encrypt that column. The data is of the sort where it’s important for the application to produce the required output, but if a user runs the stored procedure to see what the application is getting from it, it’s not critical that they see this one field.

The catch is that if the stored procedure is written with the assumption that the caller has permission to access the encryption key or its certificate, they’ll get an error. After a bit of research and pondering, I came up with two options:

  1. Create the stored procedure with EXECUTE AS OWNER (the owner in this case is dbo). This would let all users see the decrypted data – not an ideal solution.
  2. Use SQL Server’s TRY/CATCH construct to gracefully handle the error thrown when the user attempts to open the key, but doesn’t have permission to do so.

Let’s check out option 2. This example is simplified from my actual scenario to demonstrate the idea.


declare @BankId varchar(6) = '123456';

SELECT cast('' as varchar(50)) AS AccountNum
,AccountName
,AccountOwner
INTO #AccountData
FROM dbo.Accounts
WHERE OriginatingBank = @BankId
AND AccountType = 'Checking'

DECLARE @AcctNo VARCHAR(30);

BEGIN TRY
OPEN SYMMETRIC KEY MyKey DECRYPTION BY CERTIFICATE My_Cert

SELECT @AcctNo = CONVERT(VARCHAR, decryptbykey(AccountNum))
FROM dbo.Accounts
WHERE OriginatingBank = @BankId
AND AccountType = 'Checking'

CLOSE SYMMETRIC KEY MyKey
END TRY

BEGIN CATCH
SET @AcctNo = 'Access Restricted';
END CATCH

UPDATE #AccountData SET AccountNum = @AcctNo;

SELECT * FROM #AccountData;

DROP TABLE #AccountData;

TRY/CATCH in T-SQL works similarly to how it does in languages like C# or PowerShell. It allows you to attempt an operation and take care of any error conditions fairly easily.

In this case, I’m attempting to open the encryption key. But if the user doesn’t have permission to do so, it doesn’t terminate the stored procedure with an error. Instead, it jumps to the CATCH block, where I’ve defined an alternate way of handling the situation. Here, if the user doesn’t have the appropriate permissions, they’ll just get “Access Restricted” for the account number, and access to that sensitive data is a little more tightly controlled – while still letting users access the data they do need.
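For completeness, here’s a rough sketch of the permission side of this, with made-up object and role names (and assuming a database master key already exists): members of the role can open the key and see real account numbers, while everyone else lands in the CATCH block above.

-- Hypothetical setup: a certificate, a symmetric key, and a role that is allowed to decrypt.
CREATE CERTIFICATE My_Cert WITH SUBJECT = 'Account number encryption';
CREATE SYMMETRIC KEY MyKey WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE My_Cert;

CREATE ROLE AccountNumberReaders;
-- Opening the key this way needs VIEW DEFINITION on the key and CONTROL on the certificate.
GRANT VIEW DEFINITION ON SYMMETRIC KEY::MyKey TO AccountNumberReaders;
GRANT CONTROL ON CERTIFICATE::My_Cert TO AccountNumberReaders;

ALTER ROLE AccountNumberReaders ADD MEMBER SomeTrustedUser;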

SQL New Blogger Challenge November 2015 Edition – Week 2 Digest

This week’s #sqlnewblogger posts!

Author Post
@arrowdrive Anders On SQL: T-SQL Tuesday #72: Data modelling gone extremely wrong
@rabryst Time After Time – An Introduction to Temporal Tables in SQL Server 2016 using a DeLorean
@EdDebug Deploy SSDT INSERTS in Batches | the.agilesql.club
@ALevyInROC Don’t Trust the Wizard – The Rest is Just Code
@DBA_ANDY Nebraska SQL from @DBA_ANDY: T-SQL Tuesday #72 – Implicit Conversion Problems
@eleightondick SQL New Blogger Challenge: Week 1 recap | The Data Files
@eleightondick SQL New Blogger Challenge: Week 2 ideas | The Data Files
@BeginTry SQL Server 2012 Upgrade: The RPC Server is Unavailable | It's All Just Electrons

Don’t Trust the Wizard

[Image: The one wizard you can trust]

If you need to move data from one table into a new table, or even tables in a database into another database, the Import/Export Wizard in SQL Server Management Studio looks pretty tempting. Set up a source & destination, click a few buttons, kick back with a cup of tea and watch the progress bars, right?

It turns out that the wizard just isn’t as smart as it may seem. If you’re not careful, you won’t get what you’re expecting. Let’s check it out.

We’ll start by creating a real simple table in a database, containing a primary key and a computed column.

Create table sample.dbo.SourceTable (
RowId int identity(1,1) not null primary key,
Num1 int not null,
Num2 int not null,
Total as (Num1+Num2)
);

Let’s populate it with a few rows of data, then update some of that data to make sure the computed column is working. Remember, this is just to demonstrate the idea.

insert into sample.dbo.SourceTable (Num1, Num2) values (1,2);
go 100
select top 5 * from sample.dbo.SourceTable order by RowId;
update sample.dbo.SourceTable set Num1 = Num1 * RowId where RowId <= 3;
select top 5 * from sample.dbo.SourceTable order by RowId;

[Screenshot: wizard_Screenshot 1]

Great! We’ve got data, the computed columns are working, let’s copy it over to a new table in another database. We’re just going to click Next, Next, Next through the wizard this time around.

[Screenshots: wizard_Screenshot 2 through wizard_Screenshot 5]
Success! Our table has been copied and the data’s all there.

select top 5 * from Sample2.dbo.SourceTable order by RowId;

[Screenshot: wizard_Screenshot 6]

Let’s do some work on our new table and check out the results.

select top 5 * from Sample2.dbo.SourceTable order by RowId;
update Sample2.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;
select top 5 * from Sample2.dbo.SourceTable order by RowId;

[Screenshot: wizard_Screenshot 7]

Woah! That’s not right. That Total column is supposed to be Num1 + Num2, and last time I checked 2 + 4 was not 4. Let’s keep going and try adding a new record the same way it was done earlier.

insert into Sample2.dbo.SourceTable (Num1, Num2) values (100,200);

Cannot insert the value NULL into column 'RowId', table 'Sample2.dbo.SourceTable'; column does not allow nulls. INSERT fails.

Huh. Now that’s really odd, isn’t it? RowId is supposed to be an identity – we shouldn’t have to populate it. What is going on here? Let’s script out the table.

USE [Sample2]
GO

/****** Object: Table [dbo].[SourceTable] Script Date: 2015-11-10 22:36:23 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[SourceTable](
[RowId] [int] NOT NULL,
[Num1] [int] NOT NULL,
[Num2] [int] NOT NULL,
[Total] [int] NULL
) ON [PRIMARY]

GO

This is all kinds of wrong! What happened to the primary key? Or the computed column? Well, it turns out that the wizard isn’t that smart, and if you just take all the default values, you’re going to get burned. Let’s go back to the wizard and click that Edit Mappings button in the Select Source Tables and Views screen.
[Screenshot: wizard_Screenshot 8]

Well…that looks like what we got above. And it’s not what we wanted. If we click Edit SQL, this is confirmed – the table being created is not defined the same way as the source table.
[Screenshot: wizard_Screenshot 9]
Fortunately, we can edit the SQL here and make it match the source table definition, then finish the wizard.
[Screenshot: wizard_Screenshot 10]
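For reference, the edited SQL just needs to mirror the original definition – something like this (you’ll probably also want to check Enable identity insert in the column mappings):

CREATE TABLE [dbo].[SourceTable](
    [RowId] int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Num1] int NOT NULL,
    [Num2] int NOT NULL,
    [Total] AS ([Num1]+[Num2])
);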
OK, data’s copied – what do we have?

select top 5 * from Sample3.dbo.SourceTable order by RowId;
update Sample3.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;
select top 5 * from Sample3.dbo.SourceTable order by RowId;
insert into Sample3.dbo.SourceTable (Num1, Num2) values (100,200);
select * from sample3.dbo.SourceTable where rowid >= 100 order by RowId;

[Screenshot: wizard_Screenshot 11]

Everything’s there, and it’s working the way it’s supposed to. Lesson learned: don’t blindly trust the defaults, especially the ones in a wizard. Double-check everything, and then verify that your schema works the way you expect it to before doing any business with it.
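As an aside, for a small one-off copy like this you can skip the wizard entirely: script the table yourself so nothing gets lost in translation, then copy the rows with a plain INSERT. A sketch (the Sample4 database name is just for illustration):

-- Create the destination with the full definition: identity, primary key and computed column intact.
CREATE TABLE Sample4.dbo.SourceTable (
    RowId int identity(1,1) not null primary key,
    Num1 int not null,
    Num2 int not null,
    Total as (Num1+Num2)
);

-- Copy the rows, preserving the identity values. Total is computed, so it isn't inserted.
SET IDENTITY_INSERT Sample4.dbo.SourceTable ON;

INSERT INTO Sample4.dbo.SourceTable (RowId, Num1, Num2)
SELECT RowId, Num1, Num2
FROM sample.dbo.SourceTable;

SET IDENTITY_INSERT Sample4.dbo.SourceTable OFF;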

SQL New Blogger Challenge, November Edition, Week 1 Digest

Ed Leighton-Dick has renewed his New Blogger Challenge this month. Here are all (I think) the posts for this week after Ed posted his announcement. If I’ve missed any, please let me know and I’ll update.

Author Post
@arrowdrive Anders On SQL: First Timer Summit impressions.
@EdDebug Deploy SSDT INSERTS in Batches | the.agilesql.club
@EdDebug Looking at SSDT upgrade scripts | the.agilesql.club
@DBA_ANDY Nebraska SQL from @DBA_ANDY: PASS Summit 2015 Recap
@eleightondick PASS Summit 2015 Highlights | The Data Files
@OliverAsmus PASS Summit 2015: My Experience | OliverAsmus.com
@EdDebug ScriptDom Visualizer | the.agilesql.club
@eleightondick SQL New Blogger Challenge: Looking back… and a new challenge! | The Data Files
@Clem1029 Tearing down the wall | SQLDEV@Clemsplace
@ALevyInROC Why Ask Why? – The Rest is Just Code
@rabryst The SQL Server Family

Why Ask Why?

Spend any time around a 4 year old, and you will inevitably find yourself involved in a conversation which evolves into this:

  • Please do this thing
  • Why?
  • Reasonable answer
  • Why?
  • Restatement of reasonable answer
  • Why?
  • Shorter, more frustrated restatement of reasonable answer
  • Why?
  • Because that’s what has to be done
  • Why?
  • Because
  • Why?
  • I give up. Go ask your other parent

It’s a simple, but powerful and important question. The trouble is that when it’s a 4 year old asking it, in a lot of cases they can’t understand the answer. More often, they aren’t interested in understanding it.

Fortunately, there aren’t any 4 year olds in the average IT shop (although it may not be too far off).

A while ago, a data issue came to my team. Nothing major, but enough that it caused problems for a user. It’s a small glitch with an application component which pops up maybe once or twice a year, so it’s been decided that it’s better to just fix the data in those rare cases as opposed to spending 20 hours tracking down the root cause & fixing it there (I’m the SME for this component).

The requested correction was to delete the entire record, based on a previous fix to a similar but unrelated data problem. By the time I saw the request, a teammate had picked it up & started working on it.

“Wait! Don’t do it that way!” I said. “All we should be doing here is fixing the one erroneous field on that record.” This had come up in the past, but with it happening so rarely it’s easy to forget about.

I paused to catch my breath, then heard it.

Why?

I had to pause even longer to collect my thoughts. I don’t often get asked questions on things like this but I wish it happened daily.

This is the moment in which knowledge is gained, even by the answerer.

When you live & breathe a system for years on end, it’s easy to take certain aspects of it for granted. You respond without having to think about why things work the way they do – you just know that’s how it is.

The ensuing conversation was productive and I hope informative for my co-workers. While deleting the record would have the desired short-term result (making the application function properly), in the long term it would break the link between the data and a document which is referenced by that data. A net loss. Fixing the one column (setting it to the value which it should have been in the first place) allows the application to function correctly and retain access to that referenced document.

The conversation also forced me to take a closer look at my own understanding of the issue and re-evaluate what I thought I knew about it. It turns out, I had some bad assumptions in there too, which I was able to correct.

Not only did my teammates learn, I learned too. Everyone wins.

So why was the original solution of deleting the whole record requested? The answer isn’t too far removed from the idea of cargo cult programming. That is, someone saw the solution of deleting the whole record used in a similar case years ago, documented it, and it was seen as the One True Answer from that point forward – regardless of its applicability to the situation at hand.  A detailed explanation of “why” isn’t usually written for every issue that comes to our team for resolution, for a few reasons:

  • We don’t think to do it.
  • There isn’t a good way to distinguish between this bug in the system and others without having a fairly deep knowledge of the system.
  • There isn’t a way in our ticketing system to record information that isn’t visible to everyone, and the whole company does not need to see the dirty details of the internals of every system – in fact, it would probably be counterproductive.

In hindsight, a carefully-written, more thorough explanation many years ago may have prevented this particular request from being written as it was.

Asking why became the basis for Toyota’s approach to improving their manufacturing processes, and is built into Six Sigma and many other process improvement methodologies. This one word is the gateway to understanding, and once we understand, we can find ways to do things better.

If you’re curious about something, release your inner 4 year old. Just don’t act like a 4 year old when you do it. Keep asking why, get to the answers – and make sure you understand them.

If someone asks you why, embrace the question. This person is interested, they’re engaged, they want to learn! Take advantage of that opportunity to teach and spread that knowledge. Along the way, you just might learn something yourself.

Hello GETDATE() My Old Friend…

So you’ve decided that your new web application needs to record some page load time metrics so you can keep tabs on performance. Terrific! You set up a couple of page load/complete functions to write to a logging table when a page request comes in, then update the record when it finishes loading.

INSERT INTO PageLogs (
    RequestTime
   ,ResponseTime
   ,RemoteIP
   ,UserName
   ,PageId
   ,Parameters
   ,SessionId
) VALUES (
    GETDATE() 
    ,NULL
    ,'127.0.0.1'
    ,'Dave'
    ,'Home'
    ,'Pd=2015Q2'
    ,'883666b1-99be-48c8-bf59-5a5739bc7d1d'
);
UPDATE PageLogs
    SET ResponseTime = GETDATE()
    WHERE SessionId = '883666b1-99be-48c8-bf59-5a5739bc7d1d';

You set up an hourly job to delete any logs older than 2 weeks (just to prevent information overload) and you call it a day. Each morning, you run a report to look at the previous day’s performance, watch the trends over the past week or so, and you’re pretty happy with things. Pages are loading in a fraction of a second, according to the logs. People find the application useful, word spreads around the office, and adoption takes off. The project is a success!

Then the support calls start rolling in. Users say it’s taking “forever” to load pages (they don’t have exact numbers, but it’s just too slow). This can’t be possible. The report says everything’s running just as fast as it did in test!

You walk down the hall and visit your friendly Senior DBA. He pulls up his monitoring tools and shows you that the hourly maintenance that keeps the PageLogs table fit & trim is causing a bunch of blocking while it does lots of DELETEs. And your INSERT queries are the victims.

Here’s the thing: GETDATE() (like any other function) doesn’t get evaluated until that query executes. Not when you call ExecuteNonQuery(), not even when SQL Server receives the query. So even if your INSERT isn’t holding up the execution of your page (because you’ve executed it asynchronously), it won’t accurately represent when the page load started. Instead, it tells you when your query executed. In this context that can be misleading because it won’t tell you how long it really took for your page to load.
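A quick way to see this for yourself: capture a time into a variable, delay the next statement (standing in for any lag before the query actually runs), and compare that captured value with what GETDATE() returns at execution time.

DECLARE @WhenItHappened datetime = GETDATE();  -- captured when this statement runs

WAITFOR DELAY '00:00:05';  -- stand-in for any delay before the next statement executes

SELECT @WhenItHappened AS CapturedUpFront  -- the earlier value
    ,GETDATE() AS EvaluatedAtExecution;    -- roughly 5 seconds later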

If you need to log the time an event transpired accurately, GETDATE() isn’t your friend. You need to explicitly set the time in the query.

INSERT INTO PageLogs (
    RequestTime
   ,ResponseTime
   ,RemoteIP
   ,UserName
   ,PageId
   ,Parameters
   ,SessionId
) VALUES (
    '2015-05-15T09:45:00Z' 
    ,NULL
    ,'127.0.0.1'
    ,'Dave'
    ,'Home'
    ,'Pd=2015Q2'
    ,'883666b1-99be-48c8-bf59-5a5739bc7d1d'
);
UPDATE PageLogs
    SET ResponseTime = '2015-05-15T09:45:02Z'
    WHERE SessionId = '883666b1-99be-48c8-bf59-5a5739bc7d1d';
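Of course, in a real application you wouldn’t hard-code the literal – the app captures the timestamp when the request or response happens and passes it in as a parameter. Here’s a rough sketch of what that parameterized call looks like from the database’s point of view (the parameter types are guesses, since the PageLogs definition isn’t shown):

-- The batch itself never calls GETDATE(); the caller supplies the time it captured.
EXEC sp_executesql
    N'UPDATE PageLogs SET ResponseTime = @ResponseTime WHERE SessionId = @SessionId;'
    ,N'@ResponseTime datetime2(3), @SessionId varchar(36)'
    ,@ResponseTime = '2015-05-15T09:45:02'
    ,@SessionId = '883666b1-99be-48c8-bf59-5a5739bc7d1d';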

If you aren’t used to seeing significant blocking in your databases, you may not have run into this. But get into this habit anyway. At some point you probably will see blocking on a table like this, and logging with GETDATE() will make the data you attempted to write during that blocking invalid. If you can’t trust all of your data, can you trust any of it?