INF: Client Effects on SQL Server Throughput

ID: Q180775


The information in this article applies to:

 - Microsoft SQL Server version 6.5

SUMMARY

In evaluating general areas that affect performance, the most commonly considered aspects are processor speed, disk I/O, and memory on the server. Although the performance of these parts of the server is crucial, you must also consider network latency and client processing time as factors that can have a major impact on the overall performance of the system.

This article discusses the latter areas and provides guidelines for evaluating what impact they may have on the server.


MORE INFORMATION

The following example is used throughout this article. The two connections perform the same update, with only a small difference in the Transact-SQL syntax.

Connection 1


use pubs
go
select convert(char(30), GetDate(), 9) "Start Time"
go

begin transaction
go   ==>   Send to SQL Server and process results

update authors set au_lname = au_lname
go   ==>   Send to SQL Server and process results

commit transaction (or rollback transaction)
go   ==>   Send to SQL Server and process results

select convert(char(30), GetDate(), 9) "End Time"
go 

Connection 2


use pubs
go
select convert(char(30), GetDate(), 9) "Start Time"
go
begin transaction
if(0 = @@ERROR)
begin
   update authors set au_lname = au_lname
   if(0 = @@ERROR)
   begin
      commit transaction
   end
   else
   begin
      rollback transaction
   end
end
go    ==>   Send to SQL Server and process results
select convert(char(30), GetDate(), 9) "End Time"
go 

Network Round Trips

Connection 1 requires three round trips to the SQL Server computer: one to begin the transaction, one to perform the update, and one to commit or roll back the transaction. Connection 2 requires a single round trip to complete the same update.

Query Cancellation

Both the DB-Library and the ODBC APIs support asynchronous query processing. For example, DB-Library uses the dbdataready function to allow the client to poll the completion status of the query.

In DB-Library, the polling interval used by dbdataready is controlled by the DataReadySleep registry value. For additional information about the DataReadySleep registry key, please see the following article in the Microsoft Knowledge Base:
Q159234 : INF: How to Change the Sleep Value Used by Dbdataready

How Sleep Times Affect the Timings

By default, the sleep value is 250 milliseconds.

Connection 1 makes three round trips to the SQL Server. By default, the client encounters a minimum of 750 milliseconds of wait time, not counting the time for the actual network transfer. The wait time is calculated from (250 milliseconds * 3) = 750 milliseconds.

Connection 2 makes a single trip and encounters a minimum of 250 milliseconds of wait time, not counting the time for actual network transfer.

You can improve the speed of this example by a factor of three simply by taking advantage of the Transact-SQL syntax and removing two of the network round trips.
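If you want to measure the difference in your own environment, a minimal sketch such as the following times the single-batch (Connection 2) form of the update with DATEDIFF. The @dtStart and @dtEnd variable names are only illustrative; the update is the same one used in the example above.

   -- Sketch: time the single-batch (Connection 2) form of the update.
   -- Assumes the pubs sample database; variable names are illustrative.
   use pubs
   go
   declare @dtStart datetime, @dtEnd datetime
   select @dtStart = GetDate()

   begin transaction
   if (0 = @@ERROR)
   begin
      update authors set au_lname = au_lname
      if (0 = @@ERROR)
         commit transaction
      else
         rollback transaction
   end

   select @dtEnd = GetDate()
   select datediff(ms, @dtStart, @dtEnd) "Elapsed milliseconds"
   go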

How Network Roundtrips Affect Other Users

Connection 1 holds a transaction open for a minimum of 500 milliseconds: after the transaction is opened, two more round trips (each with a minimum 250 millisecond wait) are needed to complete the update and then commit or roll back the transaction. While the transaction is open, the locks it holds prevent other users from accessing the records being modified.

Connection 2 keeps the transaction open only as long as necessary to complete the operation. On a 133-MHz Pentium single-processor computer running both SQL Server and ISQL/w, the following timings are seen.

NOTE: The final network I/O is not shown in either of the following examples. After the commit or rollback has completed, the locks are released, but the final I/O is not tallied.

   Begin transaction                5 milliseconds
   Update                          20 milliseconds
   Commit/Rollback transaction      7 milliseconds
      TOTAL                        32 milliseconds 

Connection 2 will complete in approximately 32 milliseconds, whereas Connection 1 requires a much larger processing window and greatly extends the transaction latency time.

   Begin transaction                5 milliseconds
   Network I/O                    250 milliseconds
   Update                          20 milliseconds
   Network I/O                    250 milliseconds
   Commit/Rollback transaction      7 milliseconds
      TOTAL                       532 milliseconds 

As shown earlier, the network time is a simple factor of three. However, the locking impact that the example imposes on other database users is a factor of 16 (532/32 = ~16).

Now suppose that this simple example is run from a remote portable computer connecting over a 28.8 Kbps modem. In addition to the 250 millisecond delay imposed by the DataReadySleep value, the time taken to actually transmit the information over the slow link is appreciable. Connection 1 would affect other database users by an even larger factor, while Connection 2 would be affected primarily by the speed of the client computer and its link. The command is sent once and processed at the SQL Server in approximately 32 milliseconds. The only user of the system who experiences a slowdown is the remote user, which is expected because of the slow modem link.
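You can observe this blocking directly. For example, run the first two batches of Connection 1 (the begin transaction and the update) from one ISQL/w window, and before sending the commit, run the following from a second window. This is only a sketch; it relies on the standard sp_who and sp_lock system stored procedures.

   -- Run from a second connection while Connection 1's transaction is open.
   -- In the sp_who output, a nonzero blk value identifies the blocking spid.
   exec sp_who
   go
   -- sp_lock lists the locks each spid is currently holding.
   exec sp_lock
   go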

Client Lag Time

Client lag time is the period of time that elapses while the client processes the results it has received. If you look at Connection 1 again, you can quickly see how this can affect the process. If an extra 10 milliseconds are needed for the client to handle each result set, the three round trips add another 30 milliseconds to the overall transaction time, and because the first two result sets are processed while the transaction is still open, they add yet another 20 milliseconds to the transaction latency time.

Let's switch examples. In this case, there is an inventory table from an online system. You have spent months developing and installing what should be the fastest online order processing system in history. Users can search, buy, and keep a shopping cart, among other options. This is the tblInventory table:

   tblInventory
      iProductID       int
      strTitle         varchar(50)
      strDescription   varchar(255)
      iSize            int
      iInStock         int
      iOnOrder         int
      iType            int 
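
For reference, the table could be created with Transact-SQL such as the following. This is only a sketch to make the later queries runnable; the nullability, constraints, and indexes of the real table are not described in this article, and the sample row is purely illustrative.

   -- Sketch of the inventory table; column types follow the list above.
   -- Nullability, constraints, and indexes are assumptions.
   create table tblInventory
   (
      iProductID       int,
      strTitle         varchar(50),
      strDescription   varchar(255),
      iSize            int,
      iInStock         int,
      iOnOrder         int,
      iType            int
   )
   go

   -- A sample row; the values are illustrative only (iType 2 = cereal).
   insert tblInventory values (1, 'Corn Flakes', 'Breakfast cereal', 18, 250, 0, 2)
   go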

I want to purchase some cereal. However, I would like to see what is available. Cereal is defined as type 2, so the application issues the following query. In this example, the database contains 750 cereal-related items.

   Select strTitle, strDescription, iSize, iInStock from tblInventory
   where iType = 2 

SQL Server parses and compiles the query and then begins to return the results. Shared locks are acquired on the appropriate pages. Remember that shared locks block update, insert, and delete operations.

At the same time, because your application is used nationwide, six other people are trying to place cereal orders.

SQL Server fills the first tabular data stream (TDS) packet, sends it to the client, and then waits for the client to process the results. During the time that the client is processing the results (client latency time), SQL Server continues to hold a shared page lock on the page where it was processing. This shared lock can block a user who is attempting to complete an order.
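For illustration, the order-completion path might decrement the stock count with an update such as the following (the column and key value used here are assumptions based on the table above). While the browse query's shared lock is held on the page containing this row, the update waits.

   -- Hypothetical order-completion update; it needs an exclusive lock and
   -- is blocked while the browse query's shared page lock is held.
   update tblInventory
   set iInStock = iInStock - 1
   where iProductID = 1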

It seems like a simple action: select a result set from the SQL Server and insert the values into a list box. A 133-MHz Pentium computer can add 750 items to a list box in just over one second. Disabling the list box while filling it reduces this to about one-third of a second. You can significantly decrease the client latency time simply by disabling the list box while it is being filled.

You might even be inclined to change the select operation to further reduce the locking. Limit the shared lock exposure by changing the query to the following.

   select strTitle, strDescription, iSize, iInStock
   into #tblSelect
   from tblInventory
   where iType = 2

   select * from #tblSelect

The query is isolated on the SQL Server and will not start returning results until they have been moved to the temporary table and all shared locks are released from the inventory table. This limits the time that shared locks are held on the inventory table to the time required for SQL Server to move the results to tempdb. The control is again with the database and not the client.

Another way to accomplish similar behavior is to make a "smart" client. Instead of filling a list box, it may be faster to load an array. However, you are still bound by network throughput, so the temporary table is a better solution in these situations.

As you can see, the client can play a pivotal role in database throughput. You should be especially careful when working with remote and reporting systems. The amount of time that the client takes to process results while holding locks has the potential to reduce database throughput. These types of issues can be hard to detect because the latency periods may be on the order of 100 milliseconds, which is difficult to observe with the sp_who stored procedure. Use a slow link to see the behavior quickly: run the application over a RAS link and observe the overall behavior. You can also take full advantage of the SQL Trace utility to carefully profile the application.

For additional information, please see the following articles in the Microsoft Knowledge Base:
Q165951 : INF: Result Processing for SQL Server

Q172117 : INF: How to Profile Transact-SQL Code in Stored Procedures and Triggers

Q162361 : INF: Understanding and Resolving SQL Server Blocking Problems

Q167610 : INF: Assessing Query Performance Degradation

Q48712 : INF: Handling Timeouts Correctly in DB-Library

Q117143 : INF: When and How to Use dbcancel() or sqlcancel()

Additional query words: performance kbps WAN


Keywords          : SSrvGen 
Version           : Windows:6.5
Platform          : WINDOWS 
Issue type        : kbinfo 

Last Reviewed: April 14, 1999