For those not using SQL Server 2012 or above, a cursor is likely the most efficient supported and guaranteed method outside of CLR. There are other approaches, such as the “quirky update,” which can be marginally faster but is not guaranteed to work in the future; set-based approaches whose performance degrades quadratically as the table gets larger; and recursive CTE methods that often require direct #tempdb I/O or result in spills that yield roughly the same impact.
INNER JOIN – do not do this:
The slow, set-based approach is of the form:
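A minimal sketch of the triangular self-join pattern, assuming a hypothetical table dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2)) (the table and column names here are placeholders):

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- Each row joins to every earlier row, so the work grows quadratically.
SELECT t.TID, t.amt,
       RunningTotal = SUM(t2.amt)
FROM dbo.Transactions AS t
INNER JOIN dbo.Transactions AS t2
    ON t2.TID <= t.TID
GROUP BY t.TID, t.amt
ORDER BY t.TID;
```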
The reason this is slow? As the table gets larger, each incremental row requires reading n-1 rows of the table, so the total work grows quadratically (O(n²)): a recipe for failures, timeouts, or just angry users.
Correlated subquery – do not do this either:
The subquery form is similarly painful for similarly painful reasons.
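A sketch of that form, against the same hypothetical dbo.Transactions(TID, amt) table:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- The subquery still scans all earlier rows for every row.
SELECT t.TID, t.amt,
       RunningTotal = (SELECT SUM(t2.amt)
                       FROM dbo.Transactions AS t2
                       WHERE t2.TID <= t.TID)
FROM dbo.Transactions AS t
ORDER BY t.TID;
```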
Quirky update – do this at your own risk:
The “quirky update” method is more efficient than the above, but the behavior is not documented, there are no guarantees about order, and while it might work today it could break in the future. I’m including it because it is a popular and efficient method, but that doesn’t mean I endorse it. The primary reason I even answered this question instead of closing it as a duplicate is that the other question has a quirky update as the accepted answer.
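A sketch of the pattern, again against the hypothetical dbo.Transactions(TID, amt) table; the hints are commonly added to try to coerce processing in clustered-index order, but nothing here is guaranteed:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
DECLARE @rt DECIMAL(12,2);
SET @rt = 0;

-- Copy the data into a #temp table with a clustered index on TID:
SELECT TID, amt, RunningTotal = CONVERT(DECIMAL(12,2), 0)
INTO #x
FROM dbo.Transactions;

CREATE UNIQUE CLUSTERED INDEX x ON #x(TID);

-- The variable-assignment UPDATE; order of processing is NOT guaranteed:
UPDATE #x WITH (TABLOCKX)
  SET @rt = RunningTotal = @rt + amt
  OPTION (MAXDOP 1);

SELECT TID, amt, RunningTotal FROM #x ORDER BY TID;

DROP TABLE #x;
```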
Recursive CTEs
This first recursive CTE method relies on TID being contiguous, with no gaps:
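A sketch, assuming the hypothetical dbo.Transactions(TID, amt) table with TID starting at 1:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- Assumes TID starts at 1 and has no gaps.
;WITH x AS
(
  SELECT TID, amt, RunningTotal = amt
  FROM dbo.Transactions
  WHERE TID = 1
  UNION ALL
  SELECT y.TID, y.amt, x.RunningTotal + y.amt
  FROM x
  INNER JOIN dbo.Transactions AS y
    ON y.TID = x.TID + 1
)
SELECT TID, amt, RunningTotal
FROM x
ORDER BY TID
OPTION (MAXRECURSION 0); -- lift the default 100-level recursion cap
```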
If you can’t rely on that, you can use this variation, which simply builds a contiguous sequence using ROW_NUMBER():
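A sketch of that variation, against the same hypothetical table:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- ROW_NUMBER() manufactures the gap-free sequence the recursion anchors on.
;WITH s AS
(
  SELECT TID, amt, rn = ROW_NUMBER() OVER (ORDER BY TID)
  FROM dbo.Transactions
),
x AS
(
  SELECT rn, TID, amt, RunningTotal = amt
  FROM s
  WHERE rn = 1
  UNION ALL
  SELECT y.rn, y.TID, y.amt, x.RunningTotal + y.amt
  FROM x
  INNER JOIN s AS y
    ON y.rn = x.rn + 1
)
SELECT TID, amt, RunningTotal
FROM x
ORDER BY TID
OPTION (MAXRECURSION 0);
```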
Depending on the size of the data (e.g. columns we don’t know about), you may find better overall performance by stuffing only the relevant columns into a #temp table first, and processing against that instead of the base table:
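A sketch of the #temp table variation, under the same schema assumptions:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- Materialize the narrow sequence once, indexed on rn,
-- so each recursive step is a cheap seek:
SELECT TID, amt, rn = ROW_NUMBER() OVER (ORDER BY TID)
INTO #s
FROM dbo.Transactions;

CREATE UNIQUE CLUSTERED INDEX x ON #s(rn);

;WITH x AS
(
  SELECT rn, TID, amt, RunningTotal = amt
  FROM #s
  WHERE rn = 1
  UNION ALL
  SELECT y.rn, y.TID, y.amt, x.RunningTotal + y.amt
  FROM x
  INNER JOIN #s AS y
    ON y.rn = x.rn + 1
)
SELECT TID, amt, RunningTotal
FROM x
ORDER BY TID
OPTION (MAXRECURSION 0);

DROP TABLE #s;
```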
Only the first CTE method will provide performance rivaling the quirky update, but it makes a big assumption about the nature of the data (no gaps). The other two methods fall behind, and in those cases you may as well use a cursor (if you can’t use CLR and you’re not yet on SQL Server 2012 or above).
Everybody is told that cursors are evil, and that they should be avoided at all costs, but this actually beats the performance of most other supported methods, and is safer than the quirky update. The only ones I prefer over the cursor solution are the 2012 and CLR methods (below):
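A sketch of the cursor approach, against the same hypothetical dbo.Transactions(TID, amt) table:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
DECLARE @TID INT, @amt DECIMAL(12,2), @rt DECIMAL(12,2);
SET @rt = 0;

DECLARE @out TABLE
(
  TID INT PRIMARY KEY,
  amt DECIMAL(12,2),
  RunningTotal DECIMAL(12,2)
);

-- A fast-forward cursor with an explicit ORDER BY guarantees row order:
DECLARE c CURSOR LOCAL STATIC FORWARD_ONLY READ_ONLY
  FOR SELECT TID, amt FROM dbo.Transactions ORDER BY TID;

OPEN c;
FETCH NEXT FROM c INTO @TID, @amt;

WHILE @@FETCH_STATUS = 0
BEGIN
  SET @rt = @rt + @amt;
  INSERT @out (TID, amt, RunningTotal) VALUES (@TID, @amt, @rt);
  FETCH NEXT FROM c INTO @TID, @amt;
END

CLOSE c;
DEALLOCATE c;

SELECT TID, amt, RunningTotal FROM @out ORDER BY TID;
```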
SQL Server 2012 or above
New window functions introduced in SQL Server 2012 make this task a lot easier (and it performs better than all of the above methods as well):
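The windowed aggregate with an explicit ROWS frame, against the same hypothetical dbo.Transactions(TID, amt) table:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- Requires SQL Server 2012+. The explicit ROWS frame avoids
-- the on-disk spool used by the default RANGE frame.
SELECT TID, amt,
       RunningTotal = SUM(amt) OVER (ORDER BY TID
                                     ROWS UNBOUNDED PRECEDING)
FROM dbo.Transactions
ORDER BY TID;
```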
Note that on larger data sets, you’ll find that the above performs much better than either of the following two options, since the default frame is RANGE, and RANGE uses an on-disk spool. However, it is also important to note that the behavior and results of ROWS and RANGE can differ, so be sure they both return correct results before deciding between them based on this performance difference.
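With no frame specified, SQL Server defaults to RANGE UNBOUNDED PRECEDING, so these two forms (sketched against the same hypothetical table) are equivalent, and both use the on-disk spool:

```sql
-- Hypothetical schema: dbo.Transactions(TID INT PRIMARY KEY, amt DECIMAL(12,2))
-- Implicit frame; defaults to RANGE UNBOUNDED PRECEDING:
SELECT TID, amt,
       RunningTotal = SUM(amt) OVER (ORDER BY TID)
FROM dbo.Transactions
ORDER BY TID;

-- Same thing, with the frame spelled out:
SELECT TID, amt,
       RunningTotal = SUM(amt) OVER (ORDER BY TID
                                     RANGE UNBOUNDED PRECEDING)
FROM dbo.Transactions
ORDER BY TID;
```

Note that RANGE treats rows that tie on the ORDER BY value as peers and sums them together; when TID is unique (as assumed here) the results match ROWS, but with duplicates in the ordering column they can differ.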
For completeness, I’m offering a link to Pavel Pawlowski’s CLR method, which is by far the preferable method on versions prior to SQL Server 2012 (though not on SQL Server 2000, which predates CLR integration).
If you are on SQL Server 2012 or above, the choice is obvious – use the new SUM() OVER() construct (with ROWS vs. RANGE). For earlier versions, you’ll want to compare the performance of the alternative approaches on your schema and data and, taking non-performance-related factors into account, determine which approach is right for you. It very well may be the CLR approach. Here are my recommendations, in order of preference:
- SUM() OVER() with ROWS, if on SQL Server 2012 or above
- CLR method, if possible
- First recursive CTE method, if possible
- The other recursive CTE methods
- Quirky update
- Join and/or correlated subquery
For further information with performance comparisons of these methods, see this question on http://dba.stackexchange.com:
I’ve also blogged more details about these comparisons here:
Also for grouped/partitioned running totals, see the following posts: