Web Performance testing tools

November 28, 2011, by Daniel Ighișan, in Development. 2 Comments

It ‘just works’ is not enough: it must work well! So here is a list of benchmarking tools, along with some useful information and links to help you through the process.

1. ab – Apache

Or, how to performance-benchmark a web server using ab from Apache. You can use it to benchmark Apache, IIS and other web servers. It is designed to give you an impression of how your current Apache installation performs, meaning how many requests per second it is capable of serving.

When benchmarking a web server, the time it takes to serve a single page is not what matters; what matters is the average time it takes when the maximum number of users are on your site simultaneously.


  • Use the same hardware configuration and kernel (OS) for all tests
  • Use the same network configuration. For example, use a 100Mbps port for all tests
  • First, record the server load using the top or uptime command
  • Take at least 3-5 readings and use the best result
  • After each test, reboot the server and carry out the test on the next configuration (web server)
  • Record the server load again using top or uptime
  • Carry out tests using both static HTML/PHP files and dynamic pages
  • It is also important to run tests with both the non-KeepAlive and KeepAlive settings (the Keep-Alive extension provides long-lived HTTP sessions, which allow multiple requests to be sent over the same TCP connection)


Note down the server load using the uptime command:

$ uptime

Create a small static HTML page (use your own webroot):
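For example, a minimal page can be created like this (the path below is just an illustration; write the file into your own webroot):

```shell
# Create a small static page to benchmark.
# NOTE: /tmp/test.html is an example path; in practice, write the file
# into your own webroot (e.g. /var/www/html/test.html).
cat > /tmp/test.html <<'EOF'
<html>
<body><h1>ab benchmark test page</h1></body>
</html>
EOF
cat /tmp/test.html
```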

Log in to a Linux/BSD desktop computer and type the following command (use your own server IP):

$ ab -n 1000 -c 5 http://your-server-ip/test.html


  • -n 1000: ab will send 1000 requests to the server during the benchmarking session
  • -c 5: the concurrency level, i.e. ab will keep 5 requests in flight at a time


For example, if you want to send 10 requests, type the following command:

$ ab -n 10 -c 2 http://your-server-ip/test.html

Repeat the above command 3-5 times and save the best reading.
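Since you will take several readings, it helps to extract the headline number from each run automatically. A small sketch, assuming ab's usual plain-text summary format (the sample strings below are made up; in practice you would read them from saved ab output files):

```python
import re

# Made-up summaries from three ab runs; in practice, read saved output files.
runs = [
    "Requests per second:    1743.21 [#/sec] (mean)",
    "Requests per second:    1842.33 [#/sec] (mean)",
    "Requests per second:    1799.05 [#/sec] (mean)",
]

def requests_per_second(summary: str) -> float:
    """Extract the 'Requests per second' value from an ab summary."""
    match = re.search(r"Requests per second:\s+([\d.]+)", summary)
    if match is None:
        raise ValueError("no 'Requests per second' line found")
    return float(match.group(1))

# Keep the best reading across the runs, as recommended above.
best = max(requests_per_second(r) for r in runs)
print(best)
```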

Please note that 1000 requests is a small number; to get a meaningful benchmark (i.e. the number of hits you actually want to test for), you should use a larger number, like 50000.

How do I save the result as comma-separated values?

Use the -e option, which writes a comma-separated values (CSV) file containing, for each percentage (from 1% to 100%), the time (in milliseconds) it took to serve that percentage of the requests:

$ ab -k -n 50000 -c 2 -e apache2r1.csv http://your-server-ip/test.html
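The resulting CSV has one row per percentage served, which makes it easy to pull out specific percentiles. A sketch, using made-up sample rows in place of a real apache2r1.csv:

```python
import csv
import io

# Made-up sample of ab's -e output (percentage served, time in ms);
# in practice, open("apache2r1.csv") instead.
sample = """\
Percentage served,Time in ms
50,4.217
95,9.882
99,15.304
100,22.110
"""

times = {}
reader = csv.reader(io.StringIO(sample))
next(reader)  # skip the header row
for pct, ms in reader:
    times[int(pct)] = float(ms)

print(times[50])   # median service time in ms
print(times[95])   # 95th percentile
print(times[100])  # worst case
```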

How do I import the result into Excel or gnuplot so that I can create graphs?

Use the above command, or the -g option as follows:

$ ab -k -n 50000 -c 2 -g apache2r3.txt http://your-server-ip/test.html

You can find out more in the Apache ab documentation.

2. Profile queries in MySql

Your queries will probably follow the 90/10 rule. 90% of your work will be caused by 10% of the queries. You need to profile your queries so you know how they really perform.

MySQL provides two main tools for understanding query performance: EXPLAIN and SHOW STATUS.

EXPLAIN

It shows the estimated execution plan of a SELECT query: how indexes will be used, in what order a join is performed, the estimated number of rows accessed, and so forth. Together with execution time, this is a good first approximation of a query’s performance.

Its limitations:

  • you can only use it with SELECT queries.
  • you’re only getting an estimate, based on MySQL’s index statistics and whatever else it can learn about the query and tables at query compile and optimization time.

SHOW STATUS

Displays MySQL’s internal counters. MySQL increments these as it executes each query. For instance, every time the query handler advances from one entry to the next in an index, it increments a counter. One thing you can use these counters for is to get a sense of what types of operations your server does in aggregate (see the excellent mysqlreport tool for help with this).

Another use is to figure out how much work an individual query did. If you run SHOW STATUS, execute a query, and run SHOW STATUS again you can see how much the counters have incremented, and thus how much work the query did.

You can also see how non-SELECT queries perform.

The technique

You do the following:

  1. Run the query a few times to “warm up” the server.
  2. Run SHOW STATUS and save the result.
  3. Run the query.
  4. Run SHOW STATUS again and get the differences from the first run.
  5. Optionally, if you are on a quiet server, subtract the work SHOW STATUS itself causes (don’t do this on a busy server): run SHOW STATUS twice, subtract each variable to get a baseline, then subtract this baseline from the results you got above.
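The steps above amount to subtracting two counter snapshots. A small sketch of that arithmetic, with hypothetical counter values (in practice, the snapshots come from parsing the SHOW STATUS output):

```python
# Hypothetical SHOW STATUS snapshots taken before and after the query.
before = {"Handler_read_next": 1200, "Handler_write": 10, "Created_tmp_tables": 2}
after  = {"Handler_read_next": 6250, "Handler_write": 10, "Created_tmp_tables": 3}

# Optional baseline: the work SHOW STATUS itself causes (step 5).
baseline = {"Handler_read_next": 4, "Handler_write": 0, "Created_tmp_tables": 0}

def delta(before, after, baseline=None):
    """Per-counter work done by the query: after - before - baseline."""
    base = baseline or {}
    return {k: after[k] - before[k] - base.get(k, 0) for k in after}

work = delta(before, after, baseline)
print(work)  # counters incremented by the query itself
```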

Let’s look at how to analyze the numbers.

How to analyze the results

I would break the results down into logical sections as follows:

  1. Overall
  2. Table, index, and sorting
  3. Row-level operations
  4. I/O operations
  5. InnoDB operations, if applicable

First, two important overall measurements are the query’s execution time and, if available, Last_query_cost. These two numbers give you a high-level view of the query’s performance.

Next, look at how the query affected tables, indexes, files and sorting. To start with, look at the Select_ variables to see how many table and index scans you had, and how many range scans and joins with or without checks. The Sort_ variables tell you more about sorting. You’re striving for as few table and index scans as possible, and it’s best to sort as few rows as possible. By the way, you should also examine EXPLAIN to see what kind of sorting is used (for example, index sorts may be better than filesorts).

Row operation statistics come from the Handler_ variables. You can see not only reads, but writes as well. Sometimes you’ll see a lot of Handler_write events even in a plain SELECT query. This happens while the handler generates the result set — it doesn’t necessarily mean rows in your base tables got updated. GROUP BY queries that have to accumulate a result set are a typical scenario. Temporary tables are another, and sometimes results are materialized as intermediate temporary tables. Subqueries in the FROM clause are an example.

The fewer writes, the better — unless those writes enable many fewer reads. For example, materializing an intermediate temporary table and writing to it can save a lot of reads in grouped queries. If you rewrite a correlated, grouped subquery as a grouped subquery in the FROM clause, you only have to do the GROUP BY against the base table once, as opposed to the correlated subquery, which will probe into the base table once for every row in the outer table. In that case, the writes to the temporary table are a good trade-off. But don’t take my word for it, profile some queries and see!

I/O operations include the Key_ and Created_ variables, which tell you how much index, temp table, and temp file I/O happened. This is where you’ll see the temporary tables I just mentioned. Temp files may be the result of filesort operations. Key_read_requests and Key_write_requests tell you how many times the server asked to read or write a key block from or to the key cache. Key_reads and Key_writes tell you how often the operation had to go to disk (i.e. fetching more data from an index, or flushing an index write to disk). If you are using indexes well, it is normal to see high request values here. If your server is configured well, it is normal for virtually 100% of key read requests to be satisfied from the cache, without going to disk. Even if the server isn’t configured well, each key read request should bring a block of the index into memory, which can be used to satisfy some number of subsequent read requests, so if you are seeing a key cache hit rate much below 100%, something is very wrong.
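That cache-hit claim is easy to check numerically: the key cache hit rate is 1 minus Key_reads divided by Key_read_requests. A sketch with hypothetical counter values:

```python
# Hypothetical counters from SHOW STATUS.
key_read_requests = 1_000_000  # key block reads asked of the key cache
key_reads = 1_500              # reads that actually had to go to disk

miss_rate = key_reads / key_read_requests
hit_rate = 1.0 - miss_rate
print(f"{hit_rate:.2%}")  # should be very close to 100% on a well-configured server
```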

3. Speed Tracer (by Google)

Speed Tracer is a tool to help you identify and fix performance problems in your web applications. It visualizes metrics that are taken from low level instrumentation points inside of the browser and analyzes them as your application runs.

4. Zend_Db_Profiler

Zend_Db_Profiler can be enabled to allow profiling of queries.

It includes:

  • queries processed by the adapter
  • elapsed time to run the queries

If you want to learn more, see the Zend Framework reference documentation for Zend_Db_Profiler.

5. xdebug

Xdebug is a PHP extension designed to profile your website and also debug it; the Swiss Army knife for PHP developers.

Xdebug’s basic functions include the display of stack traces on error conditions, maximum nesting level protection and time tracking.

You can usually install it via PECL:

pecl install xdebug

If the PECL installation does not work for you, you can compile xdebug from source:

tar -xzf xdebug-2.0.1.tgz
cd xdebug-2.0.1
./configure --enable-xdebug --with-php-config=/usr/bin/php-config
make
cp modules/xdebug.so /usr/lib/apache2/modules/

On Windows

If you are a Windows user, you can download a compiled DLL from the Xdebug website.
Select the PHP version you are using and click the appropriate link in the Windows modules section in the right column of the page.

Then put the downloaded DLL into PHP’s extension directory, ext, which should be a subdirectory of your PHP directory. You can also put the DLL in any other directory, provided that you state the full path to the DLL in php.ini.

There are good tutorials on how to properly test with xdebug.

6. PHP_Debug

The basic purpose of PHP_Debug is to provide assistance in debugging PHP code:

  • program trace
  • variables display
  • process time
  • included files
  • queries executed
  • watch variables

This information is gathered throughout the script’s execution and is therefore displayed at the end of the script (in a nice floating div or an HTML table) so that it can be read and used at any moment.

You can find more information in the project’s documentation.


Daniel Ighișan