
SQL Server Application and Transact-SQL Performance

Application and Transact-SQL Code Greatly Affect SQL Server Performance.

Of all the areas that can negatively affect the performance of SQL Server, the application code used to access SQL Server data, including Transact-SQL code, has the greatest potential to hurt performance. Unfortunately, this is an area that many DBAs don’t directly control, and because of this, it is often neglected when performance tuning SQL Server-based applications.

As with previous articles in this series, the purpose of this part of the audit is to catch the “easy” performance-related issues of your application and Transact-SQL code that accesses SQL Server data. Besides the tips listed here, there are a lot more factors that affect SQL Server’s performance, but the ones listed here are a good beginning.

Of course, if you are using third-party software, then this part of the performance audit doesn’t affect you as you can’t do much about the code. But if you have developed your own applications, or if the applications have been developed in-house, then you should be able to take part in this portion of the SQL Server performance audit.

As you review the audit items and their discussion below, you will quickly discover that identifying some of these issues, or even fixing them, is no small task. Because of this, it is much better to build your applications with these performance tips in mind instead of having to fix them after the application has been written. You may want to keep this article around when building new applications so that you build them for performance the first time around.

Transact-SQL Checklist

Does the Transact-SQL Code Return More Data Than Needed?

The less data returned by SQL Server, the less resources SQL Server needs to operate, helping to boost the overall performance of SQL Server. This may sound obvious, but returning unnecessary data is a performance problem that I see over and over.

Here are some of the most common mistakes made by coders when returning data from SQL Server that results in more data than necessary:

  • The absence of a WHERE clause. Unless you want to return all data from a table, which is a rare activity, the use of a WHERE clause is necessary to reduce the number of rows returned.
  • As an adjunct to the above advice, a WHERE clause needs to be as selective as possible. For example, if you only need to return records from a particular date, don’t return all the records for the month or year. Design the WHERE clause so that exactly those rows you need are returned, and not one extra row.
  • In the SELECT clause, only include those columns that you need, not all of them. Along the same line, don’t use SELECT *, as you will most likely be returning more columns than you need.
  • I will refer to this one again later in this article, but it also applies here. Don’t perform SELECTs against views; instead, bypass the view and get the data you need directly from the table. The reason for this is that many views (though not all) return more data than is necessary for the calling SELECT statement, which ends up returning much more data than needed.
  • In case you are not aware of them, here are some of the performance issues caused by returning unnecessary data: Sometimes, returning too much data forces the Query Optimizer to perform a table scan instead of an index lookup; extra I/O is needed to read data; buffer cache space is wasted, which could be better used by SQL Server for other purposes; unnecessary network traffic occurs; on the client, additional data has to be stored in memory which might be better used for other uses; and so on.
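To illustrate the difference, here is a brief sketch, using a hypothetical Orders table (the table and column names are for illustration only). Both queries might satisfy the same business need, but the first returns far more data than the second:

    -- Returns every column of every row -- avoid this
    SELECT * FROM Orders

    -- Returns only the columns and rows actually needed
    SELECT OrderID, OrderDate, TotalAmount
    FROM Orders
    WHERE OrderDate = '20030615'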

    Are Cursors Being Used When They Don’t Need to Be?

    Cursors of any kind slow SQL Server’s performance. While in some cases they cannot be avoided, in many cases they can. So if your application is currently using Transact-SQL cursors, see if the code can be rewritten to avoid them.

    If you need to perform row-by-row operations, consider using one or more of these options instead of using a cursor:

  • Use temp tables
  • Use WHILE loops
  • Use derived tables
  • Use correlated sub-queries
  • Use CASE statements
  • Use multiple queries

    Each of these options can substitute for a cursor, and they all perform much faster.
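    As a sketch of the kind of rewrite involved (the table and column names here are hypothetical), a cursor that walks a table applying a price increase row by row can usually be replaced with a single set-based statement:

    -- Cursor version (slow): DECLARE a cursor, FETCH each row, UPDATE it, repeat
    -- Set-based version (fast): one statement does all the work
    UPDATE Products
    SET UnitPrice = UnitPrice * 1.10
    WHERE CategoryID = 5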

    If you can’t avoid using cursors, then at least try to speed them up.

    Are UNION and UNION ALL Properly Used?

    Many people don’t fully understand how UNION and UNION ALL work, and because of this, end up wasting a lot of SQL Server’s resources unnecessarily. When UNION is used, it performs the equivalent of a SELECT DISTINCT on the result set. In other words, a UNION will combine two similar recordsets, then search for potential duplicate records and eliminate them. If this is your goal, then using UNION is the correct syntax.

    But if the two recordsets you want to combine with the UNION will not have duplicate records, then using UNION is a waste of resources, because it will look for duplicates, even if they don’t exist. So if you know there are no duplicates in the recordsets you want to combine, then you will want to use UNION ALL, not UNION. UNION ALL combines recordsets, but does not search for duplicates, which reduces the use of SQL Server resources, boosting overall server performance.
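    For example, assuming two hypothetical yearly archive tables that are known to contain no overlapping order IDs:

    -- UNION sorts the combined result to remove duplicates -- wasted work here
    SELECT OrderID FROM Orders2002
    UNION
    SELECT OrderID FROM Orders2003

    -- UNION ALL skips the duplicate check, so it runs faster
    SELECT OrderID FROM Orders2002
    UNION ALL
    SELECT OrderID FROM Orders2003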

    Is SELECT DISTINCT Being Used Properly?

    I have noticed that some developers automatically add the DISTINCT clause to the SELECT statement, whether or not it is needed. From a performance perspective, the DISTINCT clause should only be used when its special function is needed, which is to eliminate duplicate records from a recordset. This is because the use of the DISTINCT clause requires that the resultset be sorted and duplicates be eliminated, which uses up valuable SQL Server resources. Of course, if you need to do this, then do so. But if you know that the SELECT statement will never return duplicate records, then using the DISTINCT clause is just an unnecessary waste of SQL Server resources.
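    For example, if a query already guarantees unique rows, such as one that selects a table’s primary key, DISTINCT adds a sort for nothing (table and column names are hypothetical):

    -- Unnecessary: OrderID is the primary key, so rows are already unique
    SELECT DISTINCT OrderID, OrderDate FROM Orders

    -- Same result, without the extra sort
    SELECT OrderID, OrderDate FROM Orders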

    Is the WHERE Clause Sargable?

    The term “sargable” (which is in effect a made-up word) comes from the pseudo-acronym “SARG”, which stands for “Search ARGument,” which refers to a WHERE clause that compares a column to a constant value. If a WHERE clause is sargable, this means that it can take advantage of an index to speed completion of the query. If a WHERE clause is non-sargable, this means that the WHERE clause (or at least part of it) cannot take advantage of an index, instead performing a table or index scan, which may cause the query’s performance to suffer.

    Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", generally prevent (but not always) the query optimizer from using an index to perform a search. In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against another column (not a constant) are not sargable.

    Not every WHERE clause that has a non-sargable expression in it is doomed to a table scan. If the WHERE clause includes both sargable and non-sargable clauses, then at least the sargable clauses can use an index (if one exists) to help access the data quickly.

    In many cases, if there is a covering index on the table, which includes all of the columns in the SELECT, JOIN, and WHERE clauses in a query, then the covering index can be used instead of a table scan to return a query’s data, even if it has a non-sargable WHERE clause. But keep in mind that covering indexes have their own drawbacks, such as often producing wide indexes that increase disk I/O when they are read.

    In some cases, it may be possible to rewrite a non-sargable WHERE clause into one that is sargable. For example, the clause:

    WHERE SUBSTRING(firstname,1,1) = 'm'

    can be rewritten like this:

    WHERE firstname like 'm%'

    Both of these WHERE clauses produce the same result, but the first one is non-sargable (it uses a function) and will run slow, while the second one is sargable, and will run much faster.

    If you don’t know whether a particular WHERE clause is sargable or non-sargable, check out the query’s execution plan in Query Analyzer. By doing this, you can very quickly see whether the query will use indexes or table scans to return your results.

    With some careful analysis and some clever thought, many non-sargable queries can be rewritten so that they are sargable.
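    Another common rewrite (the table and column names below are hypothetical) is replacing a function on a date column with a range comparison, which lets the optimizer use an index on that column:

    -- Non-sargable: the function on the column defeats any index on OrderDate
    SELECT OrderID FROM Orders WHERE YEAR(OrderDate) = 2003

    -- Sargable: the column stands alone, so an index on OrderDate can be used
    SELECT OrderID FROM Orders
    WHERE OrderDate >= '20030101' AND OrderDate < '20040101'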

    Are Temp Tables Being Used When They Don’t Need to Be?

    While the use of temp tables has many practical uses, like the elimination of a cursor, they still incur overhead, and if that overhead can be eliminated, SQL Server will perform faster. There are a variety of ways the use of temp tables can be eliminated, which reduces overhead and boosts performance. Some of the ways to eliminate temp tables include:

  • Rewrite the code so that the action you need completed can be done using a standard query or stored procedure.
  • Use a derived table.
  • Use the SQL Server 2000 “table” datatype. These may or may not be faster; you must test to be sure.
  • Consider using a correlated sub-query.
  • Use a permanent table instead.
  • Use a UNION statement to mimic a temp table.

    Each of these options can often be used to help eliminate temp tables from your Transact-SQL code.
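    As one sketch of the derived-table option (all names here are hypothetical), a query that might otherwise SELECT INTO a temp table and then join against it can often be written as a single statement:

    -- Instead of: SELECT ... INTO #BigSpenders, then JOIN against #BigSpenders
    SELECT o.OrderID, bs.TotalSpent
    FROM Orders o
    JOIN (SELECT CustomerID, SUM(TotalAmount) AS TotalSpent
          FROM Orders
          GROUP BY CustomerID
          HAVING SUM(TotalAmount) > 10000) AS bs
      ON o.CustomerID = bs.CustomerID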

    Are Hints Being Properly Used in Queries?

    Generally speaking, the SQL Server Query Optimizer does a good job of optimizing queries. But in some rare cases, the Query Optimizer falls down on the job, and a query hint is needed to override it in order to get the best performance out of a query.

    While hints can be useful in some situations, they can also be dangerous. Because of this, the use of hints should be done with great care.

    One of the biggest issues is inheriting code that makes heavy use of hints, especially code that was written for SQL Server 6.5 or SQL Server 7.0 and is now running under SQL Server 2000. In many cases, hints that were needed under previous versions of SQL Server aren’t applicable under newer versions, and their use can actually hurt, not help, performance.

    In another case, perhaps hints were found to be useful early when an application was first rolled out, but as time passes and the “nature” of the data stored changes over time, once useful hints may no longer apply to the “newer” data, rendering them obsolete and potentially dangerous to performance.

    In both of these cases, it is a good idea to periodically reevaluate the benefits of the query hints being used. You may find that current hints aren’t useful at all and, in fact, hurt performance. The only way to find this out is to test them in Query Analyzer, see what is actually happening, and then decide whether to continue using them based on what you find.

    Are Views Unnecessarily Being Used?

    Views are best used for handling security-related issues, not as a lazy developer’s method of storing often-used queries. For example, if you need to allow a user ad hoc access to SQL Server data, then you might consider creating a view for that user (or group), then giving that user access to the view, and not the underlying tables. On the other hand, from within your application, there is no good reason to SELECT data from views; instead, use Transact-SQL code to SELECT exactly what you want directly from the tables. A view adds overhead, and in many cases causes more data than necessary to be returned, wasting SQL Server resources.

    For example, let’s say that you have a view that returns 10 columns from 2 joined tables, and that you want to retrieve 7 columns from the view with a SELECT statement. What in effect happens is that the query underlying the view runs first, returning its data, and then your query runs against that result. Since you only need 7 of the 10 columns the view returns, more data than necessary is being returned, wasting SQL Server resources. The rule you should follow in your applications is to always access the base tables directly, not through a view.
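    A brief sketch of the idea, using a hypothetical view and tables (all names are for illustration only):

    -- Through the view: the view's full column list is processed first
    SELECT CustomerName, OrderDate, TotalAmount FROM vwCustomerOrders

    -- Directly against the base tables, returning only what is needed
    SELECT c.CustomerName, o.OrderDate, o.TotalAmount
    FROM Customers c
    JOIN Orders o ON o.CustomerID = c.CustomerID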

    Are Stored Procedures Being Used Whenever Possible?

    Stored procedures offer many benefits to developers. Some of them include:

  • They reduce network traffic and latency, boosting application performance. For example, instead of sending 500 lines of Transact-SQL over the network, all that needs to be sent is a stored procedure call, which is much faster and uses fewer resources.
  • Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
  • Client execution requests are more efficient. For example, if an application needs to INSERT a large binary value into an image column without using a stored procedure, it must convert the binary value to a character string (which doubles its size) and send it to SQL Server. When SQL Server receives it, it must then convert the character value back to the binary format. This is a lot of wasted overhead. A stored procedure eliminates this issue, as parameter values stay in binary format all the way from the application to SQL Server, reducing overhead and boosting performance.
  • Stored procedures help promote code reuse. While this does not directly boost an application’s performance, it can boost the productivity of developers by reducing the amount of code required, along with reducing debugging time.
  • Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients (assuming you keep the parameters the same and don’t remove any result set columns). This saves developer time.
  • Stored procedures provide better security for your data. If you use stored procedures exclusively, you can remove direct SELECT, INSERT, UPDATE, and DELETE rights from the tables and force developers to use stored procedures as the method for data access. This saves the DBA time.

    As a general rule of thumb, all Transact-SQL code should be called from stored procedures.

    Inside Stored Procedures, is SET NOCOUNT ON Being Used?

    By default, every time a stored procedure is executed, a message is sent from the server to the client indicating the number of rows that were affected by the stored procedure. Rarely is this information useful to the client. By turning off this default behavior, you can reduce network traffic between the server and the client, helping to boost overall performance of your server and applications.

    To turn this feature off at the stored procedure level, include the statement:

    SET NOCOUNT ON

    at the beginning of every stored procedure you write.
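    A minimal skeleton showing the placement (the procedure, table, and column names are hypothetical):

    CREATE PROCEDURE dbo.GetOrdersByCustomer
        @CustomerID int
    AS
    SET NOCOUNT ON  -- suppress the "n rows affected" messages
    SELECT OrderID, OrderDate, TotalAmount
    FROM Orders
    WHERE CustomerID = @CustomerID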

    Do Any of Your Stored Procedures Start with sp_?

    If you are creating a stored procedure to run in a database other than the Master database, don’t use the prefix “sp_” in its name. This special prefix is reserved for system stored procedures. Although using this prefix will not prevent a user-defined stored procedure from working, it can slow down its execution ever so slightly.

    The reason for this is that, by default, any stored procedure executed by SQL Server that begins with the prefix “sp_” is first looked for in the Master database. Since it is not there, time is wasted on the lookup.

    If SQL Server cannot find the stored procedure in the Master database, then it next tries to resolve the stored procedure name as if the owner of the object is “dbo”. Assuming the stored procedure is in the current database, it will then execute. To avoid this unnecessary delay, don’t name any of your stored procedures with the prefix “sp_”.

    Are All Stored Procedures Owned by DBO, and Referred to in the Form of databaseowner.objectname?

    For best performance, all objects called from within the same stored procedure should be owned by the same owner, preferably dbo. If they are not, then SQL Server must perform name resolution on objects that have the same name but different owners. When this happens, SQL Server cannot reuse the stored procedure’s in-memory plan; instead, it must recompile the stored procedure, which hinders performance.

    When calling a stored procedure from your application, it is also important that you call it by its qualified name, such as:

    EXEC dbo.myProcedure

    instead of:

    EXEC myProcedure

    Why? There are a couple of reasons, one of which relates to performance. First, using fully qualified names helps to eliminate any potential confusion about which stored procedure you want to run, helping to prevent bugs and other potential problems. More importantly, doing so allows SQL Server to locate the stored procedure’s execution plan more directly, in turn speeding up the performance of the stored procedure. Yes, the performance boost is very small, but if your server is running tens of thousands or more stored procedures every hour, these little time savings can add up.

    Are You Using Constraints or Triggers for Referential Integrity?

    Don’t implement redundant integrity features in your database. For example, if you are using primary key and foreign key constraints to enforce referential integrity, don’t add unnecessary overhead by also adding a trigger that performs the same function. The same goes for using both constraints and defaults or constraints and rules that perform redundant work. While this may sound obvious, it is not uncommon to find these issues in SQL Server databases.
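    For example, a declarative foreign key constraint (the table and column names here are hypothetical) enforces the relationship by itself; adding a trigger that checks the same rule would just duplicate the work:

    ALTER TABLE Orders
    ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
    -- No INSERT/UPDATE trigger re-checking CustomerID is needed on top of this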

    Are Transactions Being Kept as Short as Possible?

    Keep all Transact-SQL transactions as short as possible. This helps to reduce the number of locks (of all types), helping to speed up the overall performance of SQL Server. If practical, you may want to break down long transactions into groups of smaller transactions.
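    As a sketch (table and column names are hypothetical), keep only the statements that must succeed or fail together inside the transaction, and do any lookups, calculations, and validation beforehand:

    -- Preparatory work happens here, outside the transaction
    BEGIN TRANSACTION
        UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1
        UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2
    COMMIT TRANSACTION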

    Application Checklist

    Is the Application Using Stored Procedures, Strings of Transact-SQL Code, or Using an Object Model, like ADO, to Communicate With SQL Server?

    When an application needs to communicate with SQL Server, essentially it has three choices: it can use stored procedures, strings of Transact-SQL code, or it can use an object model’s properties and methods. From a performance perspective, the most efficient is stored procedures, and the least efficient are an object model’s properties and methods. Ideally, an application should use stored procedures only to access SQL Server.

    The advantages of stored procedures have already been described earlier in this article, so they won’t be repeated here. A close second are strings of Transact-SQL code sent to SQL Server. If written correctly, their query execution plans are automatically reused, helping to boost performance, although you won’t get the other benefits of a stored procedure, such as reduced network traffic. The problem with using an object model’s properties and methods is that they add an additional layer of code that slows down communication. In addition, the Transact-SQL code created by these properties and methods is often, but not always, inefficient, further hurting performance.

    What Method is the Application Using to Communicate With SQL Server: DB-LIB, DAO, RDO, ADO, .NET?

    All applications need to use a data access library (MDAC component) in order to communicate with SQL Server, and there are several to choose from. For best performance, .NET should be your first choice. If you have not yet implemented .NET into your organization, then the next best choice is ADO. Under all conditions, avoid DB-LIB (discontinued but still supported) and DAO, both of which are very slow.

    Is the Application Using ODBC or OLE DB to Communicate with SQL Server?

    If you have a choice between using ODBC or OLE DB to access your SQL Server database, choose OLE DB, as it is generally faster. In addition, using OLE DB allows you to use DSN-less connections, which handle connection maintenance faster than ODBC DSN-based connections.

    Is the Application Taking Advantage of Connection Pooling?

    Try to take advantage of “pooling” to reduce SQL Server connection overhead. Pooling is the term used to refer to the process where a client application can use a preexisting connection from a pool of available connections to connect to SQL Server without having to establish a new connection each time a new connection is required. This reduces SQL Server’s overhead and speeds up connections to SQL Server.

    Microsoft offers two types of pooling. Connection pooling is available through ODBC and can be configured by using the ODBC Data Source Administrator, the registry, or the calling application. Resource pooling is available through OLE DB, and can be configured through the application’s connection string, the OLE DB API, or the registry.

    Either connection pooling or resource pooling can be used for a given connection, but not both at the same time. Also, for pooling to work effectively, so that connections are reused, how security is implemented is critical. For a connection to be reused, the same security context must be used; otherwise another connection is automatically opened, and pooling won’t work. Essentially, this means that all users who connect from the application to SQL Server must share the same user account. If they don’t, then each user will automatically open a new connection when they need to access SQL Server through the application.

    For maximum performance, you will almost always want to take advantage of one or the other of these pooling methods when connecting to SQL Server.

    Is the Application Properly Opening, Reusing, and Closing Connections?

    Generally speaking, a connection to SQL Server should only be opened when it is needed, used, then immediately closed by the application. Assuming that you are using connection pooling and the proper security model, what happens is that if a pooled connection is not currently available, one is created. And once the connection is closed by the application, it remains open (although the application thinks it is closed), available to be reused as needed.

    Reducing how often actual connections are opened and closed reduces SQL Server’s overhead. Also, opening and closing a connection quickly from the application allows pooled connections to be reused more efficiently, which also helps to reduce overhead, boosting performance.

    Is the Transact-SQL Code Being Sent to SQL Server Optimized for SQL Server, or is it Generic SQL?

    Some applications are designed to work with multiple databases, and because of this, use generic ANSI SQL instead of Transact-SQL to access SQL Server data. While this does make it easier to connect to a wide variety of database platforms, it also hurts performance. Transact-SQL includes features not available in ANSI SQL that provide performance benefits. Ideally, for best performance, Transact-SQL should be used to access SQL Server, not generic ANSI SQL.

    Does the Application Return More Data from SQL Server Than it Needs?

    This is similar to one of the audit suggestions for Transact-SQL. Some applications, especially those that allow a user to browse data, return way too much data to the user, leaving it to the application to further “limit” the data for the user’s benefit. For example, I have seen applications that essentially return all the rows in a table, or a view, to the application, where the application then sorts the data and allows the user to browse through it. If the number of rows is not large, this is OK. But if the number of rows is huge, let’s say 100,000 or more, then SQL Server has to perform a huge amount of work (generally a table scan) to return all this data, and the network is flooded as well. No user will use all of this data. The application should be designed to return only the data the user really needs at that moment, and not one byte more.

    Another example of returning too much data includes applications that allow the user to specify the query criteria. If you must allow users to select their own criteria, it is important to prevent them from returning too many rows by accident. For example, you can use the TOP clause in your SELECT statement, or you can include default parameters for the WHERE clause to prevent users from returning every row in a table.
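    For example, the TOP clause can cap an open-ended user search (the table and column names, and the @SearchPattern parameter, are hypothetical):

    -- Never return more than 100 rows, no matter how broad the criteria
    SELECT TOP 100 OrderID, OrderDate, TotalAmount
    FROM Orders
    WHERE CustomerName LIKE @SearchPattern
    ORDER BY OrderDate DESC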

    Returning unneeded data is a terrible waste of resources and a problem that is easily avoided with a little planning.

    Does the Application Keep Transactions Open When the User is Modifying Data?

    This audit suggestion is also similar to one described above for Transact-SQL. One common activity found in most applications is allowing a user to look up a record, then update it. The key to doing this successfully is to ensure that you don’t keep the connection open, and the record locked, while it is being updated. If you do, you can create unnecessarily long blocking locks that hurt SQL Server’s performance.

    Ideally, from the application’s point of view, once the user specifies what record to update, the application should open the connection, select the record, and close the connection. Now the record is on the application’s screen. Once the user is done updating it, then the application needs to reopen the connection, update the modified record (assuming it was modified), and then close the connection. It is critical that transactions be kept as short as possible.

    Where Do I Go From Here?

    If you made it all the way here, I have to admire your sincere interest in boosting the performance of your SQL Server. As I have already mentioned, this part of the performance audit is the most difficult. In fact, you may even decide to leave it to the last part of your audit, or, if your applications are third-party, ignore it altogether. But if you think your application is the root of your performance problems, and you have access to the code, then you may want to follow these audit suggestions, step by step.

    *Originally published at SQL-Server-Performance.com

    Brad M. McGehee is a full-time DBA with a large manufacturing company, and the publisher of http://www.SQL-Server-Performance.Com, a website specializing in SQL Server performance tuning and clustering.

    He is an MVP, MCSE+I, MCSD, and MCT (former).

    Brad also runs another website called http://www.WorldClassGear.com. It provides independent gear reviews for backpackers, trekkers, and adventure travelers.
