SQL Procedural Language (T-SQL) Performance Considerations


In this post, we discuss the things to keep in mind when writing stored procedures in SQL Server, from both a performance and a best-practice standpoint.

The points below apply not only to SQL Server but also to other RDBMSs such as Oracle, Sybase, and Greenplum.

1. Write comments in your stored procedures, triggers, and SQL batches generously, whenever something is not very obvious. This helps other programmers understand your code clearly.

Don’t worry about the length of the comments; they won’t impact performance, unlike in interpreted languages such as ASP 2.0.

2. Do not use SELECT * in your queries. Always write only the required column names in the SELECT list.
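For instance, with a hypothetical Customers table (the table and column names are illustrative):

```sql
-- Avoid: pulls every column, whether you need it or not
SELECT * FROM Customers;

-- Prefer: list only the columns you actually need
SELECT CustomerID, CustomerName, City FROM Customers;
```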

This technique results in reduced disk I/O and better performance.

3. Try to avoid server-side cursors as much as possible. Always stick to a ‘set-based approach’ instead of a ‘procedural approach’ for accessing and manipulating data.

Cursors can often be avoided by using set-based SELECT statements instead. If row-by-row processing is truly unavoidable, a simple WHILE loop is usually cheaper than a cursor.
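As a sketch, here is row-by-row processing done with a WHILE loop instead of a cursor, assuming a hypothetical Employees table with an integer primary key EmployeeID:

```sql
DECLARE @id int;

-- Seed with the lowest key value
SELECT @id = MIN(EmployeeID) FROM Employees;

WHILE @id IS NOT NULL
BEGIN
    -- ... process the row identified by @id here ...

    -- Advance to the next key value
    SELECT @id = MIN(EmployeeID) FROM Employees WHERE EmployeeID > @id;
END
```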

4. Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of an index.

The following statement results in an index scan, while the second statement results in an index seek:
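For example, assuming a hypothetical Customers table with an index on CustomerName:

```sql
-- Index scan: the leading wildcard prevents the index from being used for a seek
SELECT CustomerID FROM Customers WHERE CustomerName LIKE '%inc';

-- Index seek: the optimizer can seek on the CustomerName index
SELECT CustomerID FROM Customers WHERE CustomerName LIKE 'corp%';
```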

Also, avoid searching with not-equals operators (<> and NOT), as they result in table and index scans.


5. Use derived tables wherever possible, as they often perform better. Consider the following query to find the second highest salary from the Employees table:
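One common way to write it, assuming Employees has EmployeeID and Salary columns:

```sql
-- Second highest salary: the lowest salary among the top two earners
SELECT MIN(Salary)
FROM Employees
WHERE EmployeeID IN
(
    SELECT TOP 2 EmployeeID
    FROM Employees
    ORDER BY Salary DESC
);
```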

The same query can be re-written using a derived table, as shown below, and it performs twice as fast as the above query:
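A derived-table version might look like this (same assumed Employees columns):

```sql
-- The inner query becomes a derived table, aliased as TopTwo
SELECT MIN(Salary)
FROM
(
    SELECT TOP 2 Salary
    FROM Employees
    ORDER BY Salary DESC
) AS TopTwo;
```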

This is just an example, and your results might differ in different scenarios depending on the database design, indexes, volume of data, etc. So, test all the possible ways a query could be written and go with the most efficient one.

6. Prefix the table names with the owner’s name, as this improves readability and avoids any unnecessary confusion.

Microsoft SQL Server Books Online even states that qualifying table names with owner names helps in execution plan reuse, further boosting performance.
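For example, with dbo as the owner and hypothetical object names:

```sql
-- Owner-qualified table and procedure references
SELECT OrderID, OrderDate FROM dbo.Orders;

EXEC dbo.usp_GetOrderTotals;
```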

7. Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures, and triggers in production environments, as this suppresses messages like ‘(1 row(s) affected)’ after executing INSERT, UPDATE, DELETE and SELECT statements.

This improves the performance of stored procedures by reducing network traffic.
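A minimal sketch (the procedure, table, and column names are hypothetical):

```sql
CREATE PROCEDURE dbo.usp_MarkOrderShipped
    @OrderID int
AS
BEGIN
    -- Suppress the '(n row(s) affected)' messages sent back to the client
    SET NOCOUNT ON;

    UPDATE dbo.Orders
    SET Status = 'Shipped'
    WHERE OrderID = @OrderID;
END
```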

8. Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data, whereas with old-style joins it handles both the join condition and the filtering.

The first of the following two queries shows the old-style join, while the second one shows the new ANSI join syntax:
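For example, with hypothetical Customers and Orders tables:

```sql
-- Old-style join: join condition and filter are mixed in the WHERE clause
SELECT c.CustomerName, o.OrderID
FROM Customers c, Orders o
WHERE c.CustomerID = o.CustomerID
  AND o.OrderDate >= '2024-01-01';

-- ANSI join: WHERE is used only for filtering
SELECT c.CustomerName, o.OrderID
FROM Customers c
INNER JOIN Orders o
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2024-01-01';
```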

9. Views are generally used to show specific data to specific users based on their interests. Views are also used to restrict access to the base tables by granting permission only on views.

Yet another significant use of views is that they simplify your queries. Incorporate your frequently required, complicated joins and calculations into a view so that you don’t have to repeat those joins/calculations in all your queries. Instead, just select from the view.
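For example, a view encapsulating a commonly needed join (hypothetical schema):

```sql
CREATE VIEW dbo.vw_OrderSummary
AS
SELECT c.CustomerName, o.OrderID, o.OrderDate, o.TotalAmount
FROM dbo.Customers c
INNER JOIN dbo.Orders o
    ON o.CustomerID = c.CustomerID;
GO

-- Queries now select from the view instead of repeating the join
SELECT CustomerName, TotalAmount
FROM dbo.vw_OrderSummary
WHERE OrderDate >= '2024-01-01';
```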

10. Always use a column list in your INSERT statements. This helps in avoiding problems when the table structure changes (like adding or dropping a column). Here’s an example that shows the problem.

Consider the following table:
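A plausible definition, reconstructed from the INSERT statements that follow (the column types are assumptions):

```sql
CREATE TABLE EuropeanCountries
(
    CountryID   int,
    CountryName varchar(50)
);
```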

Here’s an INSERT statement without a column list that works perfectly:

INSERT INTO EuropeanCountries VALUES (1, 'Ireland');

Now, let’s add a new column to this table:

ALTER TABLE EuropeanCountries ADD EuroSupport bit;

Now run the above INSERT statement again. Because the VALUES list no longer matches the number of columns, SQL Server returns an error similar to: "Column name or number of supplied values does not match table definition."

This problem can be avoided by writing an INSERT statement with a column list as shown below:

INSERT INTO EuropeanCountries (CountryID, CountryName) VALUES (1, 'England');

11. Perform all your referential integrity checks and data validations using constraints (foreign key and check constraints) instead of triggers, as they are faster. Limit the use of triggers to auditing, custom tasks, and validations that cannot be performed using constraints.

Constraints save you time as well, as you don’t have to write code for these validations, allowing the RDBMS to do all the work for you.
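For instance, with a hypothetical Orders/Customers schema:

```sql
-- Referential integrity via a foreign key instead of a trigger
ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

-- Data validation via a check constraint instead of a trigger
ALTER TABLE dbo.Orders
ADD CONSTRAINT CK_Orders_TotalAmount CHECK (TotalAmount >= 0);
```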

12. Always access tables in the same order in all your stored procedures and triggers. This helps avoid deadlocks. Other things to keep in mind to avoid deadlocks:

- Keep your transactions as short as possible.
- Touch as little data as possible during a transaction.
- Never wait for user input in the middle of a transaction.
- Do not use higher-level locking hints or restrictive isolation levels unless they are absolutely needed.
- Make your front-end applications deadlock-intelligent: they should be able to resubmit the transaction if the previous attempt fails with error 1205 (the deadlock victim error).
- In your applications, process all results returned by SQL Server immediately, so that the locks on the processed rows are released and do not block other sessions.
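The retry idea can be sketched in T-SQL itself using TRY/CATCH (the table names and the work inside the transaction are placeholders):

```sql
-- Sketch: resubmit a transaction when it is chosen as a deadlock victim (error 1205)
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... do the work here, accessing tables in a consistent order ...

        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;  -- deadlock victim: try again
        ELSE
            THROW;  -- anything else: rethrow to the caller
    END CATCH
END
```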

13. Offload tasks, like string manipulations, concatenations, row numbering, case conversions, type conversions, etc., to the front-end applications if these operations are going to consume more CPU cycles on the database server.

Also, try to do basic validations in the front-end itself during data entry. This saves unnecessary network roundtrips.

Of course, this is not an exhaustive list of procedural-language performance considerations. If you have any other points, please share them in the comments section and I will try to add them to this post. Thank you!
