Triggers vs. Default Constraints: Choose Wisely

Triggers can be really useful in your database, but you have to be careful with them. We often use them to record history to a separate table (at least, until we can implement temporal tables) or to keep track of when a record was last updated. The latter is usually implemented via an AFTER UPDATE trigger.

Several times a day, I’m alerted by our monitoring suite that two processes were deadlocked. This is a (mostly) normal event, but one that we’d like to avoid if we can. More than a few times, I observed this pattern:

  • Insert into table
  • Update a field in the row that was just inserted.

In my case, the LastModified update is implemented as an AFTER INSERT, UPDATE trigger, which means that as I insert the record into the table, I turn around and immediately update it. We can see this in both the execution plan and the IO stats provided by SET STATISTICS IO ON.

Example

Here’s the setup. Just a pair of basic tables; one looks like the other, except one uses a default constraint and the other an AFTER INSERT trigger. We’ll set up the tables and then insert one row into each.
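
A minimal sketch of that setup (the triggertest and defaulttest names and the LastModified column come from the post; the other columns and the trigger name are placeholders of mine):

CREATE TABLE dbo.triggertest (
    Id int IDENTITY(1,1) PRIMARY KEY,
    SomeData varchar(50),
    LastModified datetime NULL
);
GO

-- The trigger stamps LastModified on every insert
CREATE TRIGGER dbo.tr_triggertest_insert ON dbo.triggertest
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.LastModified = GETDATE()
    FROM dbo.triggertest t
    INNER JOIN inserted i ON i.Id = t.Id;
END
GO

CREATE TABLE dbo.defaulttest (
    Id int IDENTITY(1,1) PRIMARY KEY,
    SomeData varchar(50),
    LastModified datetime NULL DEFAULT (GETDATE())
);
GO

SET STATISTICS IO, TIME ON;
INSERT INTO dbo.triggertest (SomeData) VALUES ('Hello');
INSERT INTO dbo.defaulttest (SomeData) VALUES ('Hello');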

Aside: Yes, I realize that I didn’t name my default constraints. In real life, I totally would and do.

And here’s the data on each.
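
A quick SELECT against each table shows what we got:

SELECT * FROM dbo.triggertest;
SELECT * FROM dbo.defaulttest;

Both rows come back with LastModified stamped with the current date and time.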

Great, the data matches what we expect! But how much work did SQL Server do? Let’s check out the execution plan and IO stats. First for the trigger:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'triggertest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server parse and compile time: 
    CPU time = 13 ms, elapsed time = 13 ms.
Table 'triggertest'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 7 ms.

SQL Server Execution Times:
    CPU time = 15 ms,  elapsed time = 104 ms.

And now with the default constraint:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'defaulttest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 6 ms.

We want to spend our SQL Server licensing dollars wisely, so why ask it to do unnecessary work? More CPU time, more IO, and our client is waiting longer for the process to complete (I don’t like making anyone wait).

There’s a second “gotcha” with the AFTER INSERT method that applies to only some use cases. Let’s say you’re loading some historical data into the table, and that data includes the LastModified date. Or maybe your application has a very specific date that should be inserted into the table for this field.
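
Something along these lines, with made-up dates purely for illustration:

INSERT INTO dbo.triggertest (SomeData, LastModified)
VALUES ('Historical row', '2016-01-15');

INSERT INTO dbo.defaulttest (SomeData, LastModified)
VALUES ('Historical row', '2016-01-15');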

And here’s what the tables look like after the inserts: defaulttest keeps the LastModified value we supplied, but in triggertest it’s been replaced with the time of the insert.

That AFTER INSERT trigger just clobbered real data!

To alleviate this, I’m migrating these tables to use a default constraint of GETDATE() on the LastModified fields, and changing the trigger to AFTER UPDATE (in the cases where I do need to write to a history table upon insert, I’m removing the LastModified portion from the AFTER INSERT trigger).
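
Roughly, that migration looks like this (the constraint and trigger names are placeholders of mine, and yes, this time the constraint gets a name):

ALTER TABLE dbo.triggertest
    ADD CONSTRAINT DF_triggertest_LastModified DEFAULT (GETDATE()) FOR LastModified;
GO

DROP TRIGGER dbo.tr_triggertest_insert;
GO

-- Re-create the trigger so it only fires on UPDATE
CREATE TRIGGER dbo.tr_triggertest_update ON dbo.triggertest
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.LastModified = GETDATE()
    FROM dbo.triggertest t
    INNER JOIN inserted i ON i.Id = t.Id;
END
GO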

The result is the same – LastModified is set to the insertion date when the record is first created, and refreshed whenever the record is updated. But the inserts are less work for SQL Server, and I won’t clobber my data.

Quick Table Info Export with PowerShell

This week I had a user come to me asking how fields were defined on a few tables he was using in writing some reports. Long story short, he’s been tasked with writing some new reports and updating existing ones, but he doesn’t have visibility into the database itself, so he’s left with the “OK, let’s try this” approach, then reading error messages to debug when things go sideways. Very tedious.

I asked him for a list of the tables he was most interested in, and while he worked on that I set to work on (of course) a quick & dirty PowerShell script to collect the data he needed – field names, types, and whether they’re nullable.

With the help of dbatools and Doug Finke’s ImportExcel module, the resulting script to retrieve the requisite data and export it to an Excel workbook is very short. Each table lands on its own worksheet, with the header row bold and frozen so it stays in view while scrolling.
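
In sketch form, assuming the table list is a plain text file with one table name per line (the server, database, and file paths are placeholders), it looks something like this:

# Requires the dbatools and ImportExcel modules
$tables    = Get-Content -Path .\tables.txt
$smoTables = Get-DbaDbTable -SqlInstance MyServer -Database MyDatabase -Table $tables

foreach ($table in $smoTables) {
    $table.Columns | ForEach-Object {
        [PSCustomObject]@{
            ColumnName = $_.Name
            DataType   = $_.DataType.Name
            # nvarchar lengths live in the column's SMO Properties collection (more on that below)
            Length     = $_.Properties['Length'].Value
            Nullable   = $_.Nullable
        }
    } | Export-Excel -Path .\TableInfo.xlsx -WorksheetName $table.Name -BoldTopRow -FreezeTopRow -AutoSize
}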

Line by line:

  • Fetch the list of tables from a file provided to me
  • Gather a collection of SMO objects for the tables from the server
  • Loop over each table and extract the relevant data about each
  • Write the table’s information out to a worksheet named for the table in an Excel workbook

The only tricky part was fetching the field lengths for the nvarchar fields. I had to go spelunking into each field’s Properties property to get that. The result:

Export-TableInfo Results

But I can tell this isn’t disco enough for you, so I’m gonna put it right here.
– Justin Hammer, Iron Man 2

You can do this in a one-liner too. It’s not as pretty, but it’s quite doable and the resulting Excel file is identical.
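
Something like this (again a sketch, with the same placeholder names):

Get-DbaDbTable -SqlInstance MyServer -Database MyDatabase -Table (Get-Content .\tables.txt) | ForEach-Object { $t = $_; $t.Columns | Select-Object Name, @{n='DataType';e={$_.DataType.Name}}, @{n='Length';e={$_.Properties['Length'].Value}}, Nullable | Export-Excel -Path .\TableInfo.xlsx -WorksheetName $t.Name -BoldTopRow -FreezeTopRow -AutoSize }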

End to end, I turned this around in about 20 minutes, and the requestor’s life got easier in a big hurry. Best of all, when he comes back with another batch of tables for his next set of reports, I can turn the request around in less than five minutes.