
Using Profiler to Identify Poorly Performing Queries


Identifying Long Running Queries is First Step

At this step in the SQL Server performance audit, you should have identified all the “easy” performance fixes.

Now it is time to get your hands a little dirtier and identify queries (including stored procedures) that run longer than they should and use up more than their fair share of SQL Server resources.

Slow running queries are ones that take too long to run. So how long is too long? That is a decision you have to make. Generally speaking, I use a cutoff of 5 seconds: any query that runs in 5 seconds or less is fast enough, while any query that takes longer than 5 seconds is long running. This cutoff is arbitrary. In the company where I work, the report writers, who write most of the queries that run against our databases, have a different standard than I do: they only consider a query to be long running if it takes more than 30 seconds. So one of your first steps is to decide what you consider a long running query to be, and then use that as your standard during this portion of the performance audit.

We don’t have unlimited time to tune queries. All we can do is identify the queries that need the most work and tune them first. If time remains, we can then focus on queries that are less critical (but still troublesome) to the overall performance of our SQL Servers. Also keep in mind that, no matter how hard you try, there may be little or nothing you can do to improve the performance of a particular query.

Before You Begin

For this part of the performance audit, you will be using the SQL Profiler tool that comes with SQL Server. As this article focuses on how to perform a performance audit, and not on how to use tools, it is assumed that you know how to use SQL Profiler. If you have not used it before, check out the SQL Server Books Online to get you started on the basics of how to use it.

Before you begin using Profiler to capture the query activity in your SQL Servers, keep the following in mind:

  • Don’t run Profiler on the same server you are monitoring, as this can noticeably and negatively affect that server’s performance. Instead, run it on another server or workstation and collect the data there.
  • When running Profiler, do not collect more data than you need. The more you collect, the more resources are consumed collecting it, slowing down performance. Select only those events and data columns you really need. I will make recommendations on exactly what to collect shortly.
  • Collect data over a “typical” production time, say over a typical 3-4 hour production period. This may vary, depending on how busy your server is. If you don’t have a “typical” production time, you may have to collect data over several different periods of a typical production day to get all the data you need.
  • When you use Profiler, you have two options of how to “set it up.” You can choose to use the GUI Profiler interface, or if you like, you can use the built-in Profiler system stored procedures. While using the GUI is somewhat easier, using the stored procedures to collect the data incurs slightly less overhead. In this article, we will be using the GUI interface.
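Although this article uses the GUI, the stored-procedure route mentioned above looks roughly like the following sketch. It uses the documented sp_trace_create and sp_trace_setstatus system procedures (SQL Server 2000 and later); the trace file path is hypothetical, and note that the file is written by the SQL Server process itself, so the path is interpreted on the server.

```sql
-- Sketch: defining a server-side trace with the built-in system
-- procedures instead of the Profiler GUI.
DECLARE @TraceID int
DECLARE @maxfilesize bigint
SET @maxfilesize = 100  -- maximum trace file size in MB (adjust to taste)

-- 0 = default options; the file name (no .trc extension) is hypothetical
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\AuditTrace', @maxfilesize

-- ... register events, columns, and filters here with
--     sp_trace_setevent and sp_trace_setfilter ...

EXEC sp_trace_setstatus @TraceID, 1  -- 1 = start the trace
SELECT @TraceID  -- note the ID; you need it to stop and close the trace later
```

To stop and close the trace when your collection window ends, you would call sp_trace_setstatus with status 0 (stop) and then 2 (close and delete the trace definition).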

    What Data to Collect

    Profiler allows you to specify which events you want to capture and which data columns from those event to capture. In addition, you can use filters to reduce the incoming data to only what you need for this specific analysis. Here’s what I recommend:

    Events to Capture

  • Stored Procedures–RPC:Completed
  • TSQL–SQL:BatchCompleted
    You may be surprised that only two events need to be captured: one for capturing stored procedures and one for capturing all other Transact-SQL queries.

    Data Columns to Capture

  • Duration (data needs to be grouped by duration)
  • Event Class
  • DatabaseID (If you have more than one database on the server)
  • TextData
  • CPU
  • Writes
  • Reads
  • StartTime (optional)
  • EndTime (optional)
  • ApplicationName (optional)
  • NTUserName (optional)
  • LoginName (optional)
  • SPID
    The data you want to capture includes some columns that are very important, especially Duration and TextData, and some that are not as important but can still be useful, such as ApplicationName or NTUserName.
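If you take the server-side trace route instead of the GUI, the events and columns above are registered one pair at a time with sp_trace_setevent. The event and column IDs below are the documented ones (event 10 = RPC:Completed, event 12 = SQL:BatchCompleted; columns 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU, 12 = SPID); the @TraceID value is a placeholder for the ID returned by sp_trace_create.

```sql
-- Sketch: registering the two recommended events with a few of the
-- recommended data columns (one sp_trace_setevent call per pair).
DECLARE @TraceID int
SET @TraceID = 1  -- placeholder: use the ID returned by sp_trace_create
DECLARE @on bit
SET @on = 1

EXEC sp_trace_setevent @TraceID, 10, 13, @on  -- RPC:Completed, Duration
EXEC sp_trace_setevent @TraceID, 10,  1, @on  -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 18, @on  -- RPC:Completed, CPU
EXEC sp_trace_setevent @TraceID, 10, 16, @on  -- RPC:Completed, Reads
EXEC sp_trace_setevent @TraceID, 10, 17, @on  -- RPC:Completed, Writes
EXEC sp_trace_setevent @TraceID, 10, 12, @on  -- RPC:Completed, SPID
EXEC sp_trace_setevent @TraceID, 12, 13, @on  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 12,  1, @on  -- SQL:BatchCompleted, TextData
-- ... repeat for the remaining columns you chose to collect ...
```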

    Filters to Use

  • Duration > 5000 milliseconds (5 seconds)
  • Don’t collect system events
  • Collect data by individual database ID, not all databases at once
  • Others, as appropriate
    Filters are used to reduce the amount of data collected: the more filters you use, the more irrelevant data you can screen out. Generally I use the three filters above, though others can be added as appropriate to your situation. Of these, the most important is duration, because, as we have already discussed, I only want to collect information on queries that run long enough to matter.
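For a server-side trace, the same filters are expressed with the documented sp_trace_setfilter procedure (arguments: trace ID, column ID, logical operator where 0 = AND, comparison operator, value). A sketch of the duration and database filters, with a placeholder trace ID and a hypothetical database ID:

```sql
-- Sketch: the duration and database filters from the list above.
DECLARE @TraceID int
SET @TraceID = 1  -- placeholder: use the ID returned by sp_trace_create

-- Duration (column 13) greater than (comparison operator 4) 5000.
-- The value must match the column's data type, hence the bigint variable.
-- (Note: in the Profiler GUI this is milliseconds; SQL Server 2005 and
-- later server-side traces record Duration in microseconds.)
DECLARE @duration bigint
SET @duration = 5000
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration

-- DatabaseID (column 3) equal to (comparison operator 0) one database.
DECLARE @dbid int
SET @dbid = 7  -- hypothetical; use DB_ID('YourDatabase')
EXEC sp_trace_setfilter @TraceID, 3, 0, 0, @dbid
```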

    Collecting the Data

    Depending on the filters you use, the amount of time you run Profiler, and how busy your server is, you may collect many rows of data. While you have several choices, I suggest you configure Profiler to save the data to a file on your local computer (not on the server you are profiling), and not set a maximum file size; instead, let the file grow as big as it needs to. You may want to watch the file’s growth in case it gets out of hand, but in most cases, if you have used appropriate filters, its size should stay manageable. I recommend using one large file because it makes identifying long running queries easier.

    As mentioned before, collect your trace file during a typical production period, over a period of 3-4 hours or so. As the data is collected, it is sorted for you by duration, with the longest running queries appearing at the bottom of the Profiler window. It can be interesting to watch this window for a while during collection. If you like, you can configure Profiler to turn itself off automatically at the appropriate time, or you can stop it manually.

    Once the time is up and the trace stopped, the Profiler trace is now stored in the memory of the local computer, and on disk. Now you are ready to identify those long running queries.

    Analyzing the Data

    Guess what, you have already identified all queries that ran during the trace collection that exceed your specified duration, whatever it was. So if you selected a duration of 5 seconds, you will only see those queries that took longer than five seconds to run. By definition, all the queries you have captured need to be tuned. “What! But over 500 queries were captured! That’s a lot of work!” It is not as bad as you think. In most cases, many of the queries you have captured are duplicate queries. In other words, you have probably captured the same query over and over again in your trace. So those 500 captured queries may only be 10, or 50, or even 100 distinct queries. On the other hand, there may be only a handful of queries captured (if you are lucky).
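One way to collapse those duplicates is to load the saved trace file back into SQL Server with the documented fn_trace_gettable function and group on the query text. The file path below is hypothetical, and casting TextData to NVARCHAR is needed because the column comes back as ntext, which cannot be grouped directly:

```sql
-- Sketch: counting how many times each distinct query was captured.
SELECT CONVERT(NVARCHAR(4000), TextData) AS Query,
       COUNT(*)      AS TimesCaptured,
       AVG(Duration) AS AvgDuration
FROM ::fn_trace_gettable('C:\Traces\AuditTrace.trc', DEFAULT)
WHERE TextData IS NOT NULL
GROUP BY CONVERT(NVARCHAR(4000), TextData)
ORDER BY TimesCaptured DESC
```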

    Whether you have just a handful or a lot of slow running queries, your next job is to determine which are the most critical to analyze and tune first. This is where you need to set priorities, as you probably don’t have enough time to analyze them all.

    To prioritize the long running queries, you will probably want to first focus on those that run the longest. But as you do this, keep in mind how often each query is run.

    For example, say you know that a particular query is for a report that runs only once a month (and you happened to capture it while it was running), and that it took 60 seconds to run. It is probably not as high a priority to tune as a query that takes 10 seconds to run but runs 10 times a minute. In other words, you need to balance how long a query takes to run against how often it runs. With this in mind, identify and prioritize the queries that consume the most physical SQL Server resources overall. Once you have done this, you are ready to analyze and tune them.
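That balancing act can be done directly in the trace data: totaling duration (and resource columns) per distinct query naturally ranks a frequent 10-second query above a monthly 60-second one. A sketch, again using the documented fn_trace_gettable function and a hypothetical file path:

```sql
-- Sketch: prioritizing by total resource consumption, not single runs.
SELECT CONVERT(NVARCHAR(4000), TextData) AS Query,
       COUNT(*)      AS Executions,
       SUM(Duration) AS TotalDuration,
       SUM(CPU)      AS TotalCPU,
       SUM(Reads)    AS TotalReads
FROM ::fn_trace_gettable('C:\Traces\AuditTrace.trc', DEFAULT)
WHERE TextData IS NOT NULL
GROUP BY CONVERT(NVARCHAR(4000), TextData)
ORDER BY TotalDuration DESC  -- or TotalCPU / TotalReads, as appropriate
```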

    Analyze Queries by Viewing Their Execution Plans

    To analyze the queries you have captured and prioritized, you will need to move the code to Query Analyzer in order to view and analyze its execution plan. As the focus of this article is on auditing, not analysis, we won’t spend time here showing you how to analyze specific queries. That is a very large subject unto itself, and is covered in other parts of the SQL-Server-Performance website.

    How you move the code to Query Analyzer depends on the code. If what you captured is plain Transact-SQL, you can cut and paste it directly into Query Analyzer for analysis. But if the code you captured is a stored procedure call, you have a little more work to do, because Profiler does not show the code inside the stored procedure; it only shows the procedure’s name, along with any parameters that were passed to it. In this case, to analyze the query, you must locate the stored procedure in question and cut and paste its code into Query Analyzer. Then, assuming parameters were passed to it, you must manually modify that code so it runs with the parameter values captured by Profiler.
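If you prefer a text plan over the graphical one, the documented SET SHOWPLAN_TEXT option in Query Analyzer returns the plan without executing the statement. The query and parameter value below are hypothetical stand-ins for code pasted from a captured stored procedure:

```sql
-- Sketch: viewing a text execution plan for pasted code.
SET SHOWPLAN_TEXT ON
GO
-- body pasted from the stored procedure, with the captured
-- parameter value substituted in (hypothetical example):
SELECT * FROM Orders WHERE CustomerID = 'ALFKI'
GO
SET SHOWPLAN_TEXT OFF
GO
```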

    Now the time-consuming chore begins, and that is the analysis of each query’s execution plan to see if there is any way the query can be tuned for better performance. But because you have now identified and prioritized these problematic queries, your time will be much more efficiently spent.

    *Originally published at SQL-Server-Performance.com

    Brad M. McGehee is a full-time DBA with a large manufacturing company, and the publisher of http://www.SQL-Server-Performance.Com, a website specializing in SQL Server performance tuning and clustering.

    He is an MVP, MCSE+I, MCSD, and MCT (former).

    Brad also runs another website called http://www.WorldClassGear.com. It provides independent gear reviews for backpackers, trekkers, and adventure travelers.
