Triggers vs. Default Constraints: Choose Wisely

Triggers can be really useful in your database, but you have to be careful with them. We often use them to record history to a separate table (at least until we can implement temporal tables) or to keep track of when a record was last updated, usually via an AFTER UPDATE trigger.

Several times a day, I’m alerted by our monitoring suite that two processes were deadlocked. This is a (mostly) normal event, but one that we’d like to avoid if we can. More than a few times, I observed this pattern:

  • Insert a row into the table.
  • Update a field in the row that was just inserted.

In my case, the LastModified update is implemented as an AFTER INSERT, UPDATE trigger, which means that as I’m inserting the record into the table, I’m turning around and immediately updating it. We can see this in both the execution plan and the IO stats provided by SET STATISTICS IO ON.
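As a reminder, the IO and timing output in this post comes from turning on the session-level statistics options before running the statements under test. This is standard T-SQL, nothing specific to this demo:

```sql
-- Report logical/physical reads and CPU/elapsed time per statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ...run the inserts under test here...

-- Turn the options back off when finished
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```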


Here’s the setup: just a pair of basic tables, one looking like the other except that one uses a default constraint and the other an AFTER INSERT trigger. We’ll set up the tables and then insert one row into each.

Aside: Yes, I realize that I didn’t name my default constraints. In real life, I totally would and do.

USE Demo;
GO
/* Just to prove there's no shenanigans */
CREATE TABLE triggertest
(id INT IDENTITY(1, 1), Home INT, Away INT, LastModified DATETIMEOFFSET);
CREATE TABLE defaulttest
(id INT IDENTITY(1, 1), Home INT, Away INT, LastModified DATETIMEOFFSET DEFAULT SYSDATETIMEOFFSET());
GO
CREATE OR ALTER TRIGGER tr_triggertest ON [triggertest]
AFTER INSERT
AS
UPDATE t
SET LastModified = SYSDATETIMEOFFSET()
FROM triggertest t
JOIN inserted i ON t.id = i.id;
GO
INSERT INTO triggertest(Home,Away) VALUES (15, 22);
INSERT INTO defaulttest(Home,Away) VALUES (15, 22);
SELECT * FROM triggertest;
SELECT * FROM defaulttest;

And here’s the data on each.

Great, the data matches what we expect! But how much work did SQL Server do? Let’s check out the execution plan and IO stats. First for the trigger:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'triggertest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server parse and compile time: 
    CPU time = 13 ms, elapsed time = 13 ms.
Table 'triggertest'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 7 ms.

SQL Server Execution Times:
    CPU time = 15 ms,  elapsed time = 104 ms.

And now with the default constraint:

SQL Server parse and compile time: 
    CPU time = 0 ms, elapsed time = 0 ms.
Table 'defaulttest'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
    CPU time = 0 ms,  elapsed time = 6 ms.

We want to spend our SQL Server licensing dollars wisely, so why ask it to do unnecessary work? More CPU time, more IO, and our client is waiting longer for the process to complete (I don’t like making anyone wait).

There’s a second “gotcha” with the AFTER INSERT method that applies to only some use cases. Let’s say you’re loading some historical data into the table, and that data includes the LastModified date. Or maybe your application has a very specific date that should be inserted into the table for this field.

INSERT INTO triggertest(Home,Away,LastModified) VALUES (14,17,'2008-02-03 22:05:00 -05:00');
SELECT * FROM triggertest;
INSERT INTO defaulttest(Home,Away,LastModified) VALUES (14,17,'2008-02-03 22:05:00 -05:00');
SELECT * FROM defaulttest;

And here’s what the tables look like after the inserts.

That AFTER INSERT trigger just clobbered real data!

To alleviate this, I’m migrating these tables to use a default constraint of GETDATE() on the LastModified fields and changing the trigger to AFTER UPDATE. (In the cases where I do need to write to a history table upon insert, I’m removing the LastModified portion from the AFTER INSERT trigger.)

The result is the same – LastModified is set to the insertion date when first creating the record, and updated when updating the record. But the inserts are less work for SQL Server, and I won’t clobber my data.
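A minimal sketch of that migration against the demo triggertest table, assuming it has an id identity column and a LastModified column as in the setup. It’s shown here with SYSDATETIMEOFFSET(), which matches the offset-aware values in the demo; GETDATE() works the same way for a plain datetime column:

```sql
-- Inserts: let a default constraint stamp LastModified, with no trigger work
ALTER TABLE triggertest
    ADD DEFAULT SYSDATETIMEOFFSET() FOR LastModified;
GO
-- Updates: replace the AFTER INSERT trigger with AFTER UPDATE only
CREATE OR ALTER TRIGGER tr_triggertest ON triggertest
AFTER UPDATE
AS
UPDATE t
SET LastModified = SYSDATETIMEOFFSET()
FROM triggertest t
JOIN inserted i ON t.id = i.id;
GO
```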

8 thoughts on “Triggers vs. Default Constraints: Choose Wisely”

  1. Pingback: Trigger Or Constraint? – Curated SQL

  2. Interesting article. Default constraints work well, but not so much for an update of an existing record. I wonder what the impact would be of forcing developers to include the update to “audit columns” in their code. A check constraint could be added to reject data where the last modified is outside of some range, i.e. datediff(second, LastModified, getdate()) < 2. It would force developers into including this info as part of their updates. The downside is that large updates may take longer than the allowed window, and ETL processes may be affected. Oh well, things to ponder in development.


  3. Related to the trigger approach: you could use IF NOT UPDATE(LastModified)… to only modify the value when it wasn’t modified.


  4. The default constraint will set LastModified when a new record is created, but won’t change it when a record is modified.


  5. If the issue with the trigger is that it should only be updating LastModified if the field is NULL then that can be rectified by wrapping SYSDATETIMEOFFSET() with IsNULL:

    LastModified = IsNull(LastModified, SYSDATETIMEOFFSET()).

    That said, the use case is highly contingent on business rules. Some organizations might consider an import of data to be a modification in which case LastModified should be overwritten even if it is populated. In such circumstances, the approach using default field values would not be correct.

    Another possible issue with the approach that uses default values is that if the record is created explicitly with a NULL value, SQL Server will not invoke the default constraint because a “value” was explicitly specified. One might ask “why would one do that?” It may not be a choice. Quite a few ETL tools, SSIS packages, etc. might specify NULL for the column.

    Another consideration is when doing imports, triggers can be disabled. In fact, disabling triggers is a very normal thing to do when importing data. This further muddies the water. With triggers disabled, the approach using a default value would work (provided LastModified isn’t explicitly set to NULL) but the trigger solution would fail.

    One last advantage the trigger solution has is that the trigger can be set to handle UPDATE in addition to INSERT. This means that LastModified can be automatically updated by the database all of the time and not only on insert.

    But that increased flexibility does come at a cost in performance. And, also perhaps, some future unexpected surprises. Many times future dba/developers won’t know that triggers are there and will wonder where the LastModified values are coming from.


  6. A Clustered Index would help with performance 🙂

    @Paul Hunter UPDATE(LastModified) only indicates that the column was included in the Insert / Update statement, not that it was modified. I suppose the fact that it was included indicates a value was assigned, so that might be good enough, but the column’s value might be NULL or might not have changed e.g. SET LastModified = COALESCE(LastModified, GetDate())

    Whilst all my SProcs could be built to explicitly set ModifyDate it would be easy to forget to include that in a quick & dirty:

    UPDATE MyTable SET MyStatus=123 WHERE SomeColumn='XYZ'

    and thus we prefer to sacrifice some performance for the safety net that a Trigger saves the DEVs from themselves 🙂

    Our stuff is all OLTP. Except in rare circumstances all our tables have AFTER triggers and also a corresponding Archive table. Whilst the two-disk-updates side-effect of the After Trigger has a cost, I like the fact that there is no by-pass (other than deliberately disable the trigger) and Modify Date/Time (and, in our case, Version No) are guaranteed.

    We also RTRIM all string columns and (with a few specific exceptions) set any empty string column to NULL, thus avoiding the need for:

    WHERE (MyColumn IS NULL OR MyColumn = '')

    a decision that we took many decades ago to not [be able to / need to] differentiate between NULL and Empty String (unless explicitly handled differently – but offhand I can’t think of a single instance where we have done that).

    We use the AFTER UPDATE/DELETE aspect to store the DELETED record into the Archive table (we decided there was no point storing the Current record in the Archive table, as it’s the same as the one in the actual table), and also to set Modify date and User [if NULL, as second-best, by using user_name()] (AFTER UPDATE). We set Create date using a DEFAULT and require that to be NOT NULL.

    We typically disable triggers during import (i.e. where we want the Update Date to be that of the original source data, and Archive integrity is not critical).

    We purge stale rows in Archive table (i.e. Older than X) but retain at least one row (so that it is possible to say “This record has not been modified in last X days, but was previously modified on Y [and by Z]”).

    We use a synthetic User ID (e.g. “!123”) to indicate the last update was by a “process” (i.e. we can deduce what that process was)

    We also increment a VersionNo column on every update (usually Small-INT, sometimes INT), which we then use for Optimistic Locking (all our data entry forms are web-based). We also have a single-column unique ID [usually INT, rarely GUID] on every table, so the web-form just needs ID and VersionNo to uniquely identify record and version, and we don’t have to cope with multi-part keys / weird Character Sets! in GET data (clustered index key(s) are different to ID when appropriate 🙂 )

    (We have a process that checks for any VersionNo approaching Max-Limit … but for normal tables that “Never occurs”)

    @Andy Levy “I have an `AFTER UPDATE` trigger to handle that requirement [set LastModified]”

    Never checked if it makes any difference to performance having multiple triggers, but we have a single AFTER trigger responsible for updating the rows-just-inserted/updated and also the insert-to-Archive-table (if anything in the DELETED table)

    It seems to have stood the test of time, but I’d appreciate any feedback.

