Sometimes in our lives, we all have pain, we all have sorrow. And sometimes we also have to launch Drupal sites into the wild blue yonder. It's during these times that we separate the grown-ups from the n00bs, and we see how well our site performs under heavy load. Many of us didn't need to worry about speed, page size, and server load in our younger years when we were building sites for Uncle Don and Aunt Sue, but eventually you get that big client, and you need some help.
Testing your site's performance
There are several ways to test, and a few metrics to acquaint yourself with. Not all metrics are created equal, but each of them matters at one time or another. In Part I of this post, you will read about testing with the Apache Benchmark tool on the command line.
Useful Vocabulary Words
The concepts involved with testing a site don't always overlap with developing, administering and theming a site, so here is a glossary of useful performance vocabulary words. You don't have to memorize the following terms, but you may want to bookmark this page, and come check out this list if you get stuck.
- Execution Time
- The amount of time your Apache server takes to create the output of the page.
- Page Load Time
- The duration from when the client sends a request to the server, until it receives the end of the page.
- Document Size or Length
- Usually measured in kilobytes, the size of the HTML source of the page.
- Rendered Document Size
- The total size of the page once the browser has fetched everything it references: images, CSS, JavaScript, and so on.
- HTTP Request
- The message a client (usually a browser) sends to ask the server for a page or other resource.
- HTTP Headers
- The part of a Request that humans don't see; usually includes information about the document, the user's session, and other things useful to the client or server; HTTP Headers are sent on the Request and the Response.
- HTTP Response
- The server's answer to a Request. In addition to a document, a Response carries a status code such as 200 (OK), 301 (Moved Permanently), or 404 (Not Found), and an error code may come with an error page.
- Server Utilization
- Many sites will measure their utilization during normal times of traffic, but also during spikes in traffic. "Yesterday we were Dugg, and all the CPUs only got to 80%. Can you believe it?"
- Slow Query Log
- Not all database queries are created equally, and your MySQL database server has a feature to help you locate queries that take too long and write them to a log file. This is useful for figuring out if you may need a new Index (see below) or if your querying skills are not ninja-level.
- Database Indexes
- If you're querying a table on a column without an index, the database server has to work a lot harder to find the rows you want. On the other hand, indexing every column wouldn't be good either, since every index has to be updated on every write. This is one of those "moderation" instances.
- CDN or Content Delivery Network
- A geographically distributed set of servers that caches your static assets (images, CSS, JavaScript) close to your visitors, taking load off your own web server.
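To make the Slow Query Log entry above concrete, here is an illustrative my.cnf fragment that enables it. The file path and threshold are placeholders, and the option names follow MySQL 5.1 and later (earlier versions used log_slow_queries), so check your server's documentation:

```ini
[mysqld]
# Log queries that take longer than 2 seconds to a dedicated file.
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
```

Queries that land in this log are often a sign that a table needs an index, or that the query needs a rewrite.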
Some tools for testing site performance and debugging issues work from inside your browser. These should be at the top of your list.
- Devel module for Drupal
- This suite of modules includes tools for database query profiling, page generation timing, memory usage, and more.
- YSlow for Firebug
- YSlow analyzes web pages and tells you why they're slow based on the rules for high performance web sites. Your page gets an average score of A-F, just like in school!
- Xdebug Debugger and Profiler Tool for PHP
- A PHP extension for powerful debugging. It supports stack and function traces, profiling information and memory allocation and script execution analysis.
If you are comfortable with the UNIX command line (and you shouldn't be afraid of the shell), you likely already know several of these tools. Learn them, love them.
- Opcode Caching
- A wide array of tools that work by caching the compiled bytecode of PHP scripts to avoid the overhead of parsing and compiling source code on each request. APC and XCache are two popular examples.
- ab (Apache Benchmark)
- A tool to help you determine how your web server will perform in times of heavy load. More on this later in the article.
- htop interactive process-viewer for Linux
- top is an old-school command-line tool for looking at your system's processes, htop is a fancier version. This is a nice way to see your Server Utilization.
- iostat for Disk Input/Output
- A system monitoring tool used to collect and show storage input and output statistics. If your application is writing to disk often, this can be a bottleneck.
- netstat for Network I/O
- A tool for finding network problems and measuring the amount of traffic on the network as a performance indicator. This refers specifically to the traffic in your data center.
- free for Memory Utilization
- This tool displays the total amount of free and used physical and swap memory in the system. If you are using too much memory, you may need to upgrade your server or get more servers.
Testing with ab
There are a few kinds of tests you can perform: usually a fixed number of total requests, or a variable number of connections over a fixed amount of time. To answer "How many requests per minute can we handle?", you want to fix the time to 60 seconds. To simulate being "Dugg" or "slashdotted", you want to increase the number of concurrent connections and see how fast your server can return several thousand pages.
To count requests over a certain period of time, turn off your page caching and use something like:
$ ab -kc 10 -t 60 http://localhost/
To count time taken to return a number of complete requests, use:
$ ab -n 1000 -kc 10 http://localhost/
- The -n is the number of requests to send, no matter how long it takes.
- The -c is the number of requests to send at one time, or concurrency.
- The -t is the amount of time to send requests, in seconds.
- The -k tells ab to use the HTTP KeepAlive feature.
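Putting those flags together, a small wrapper script can queue up one benchmark per page you care about. This is only a sketch; the base URL and paths are placeholders for your own site:

```shell
# Write one ab command per page into a plan file, then review and run it.
BASE="http://localhost"
: > ab_plan.txt
for path in / /store /blog; do
  name=$(echo "$path" | tr '/' '_')
  echo "ab -n 1000 -kc 10 ${BASE}${path} > ab${name}.txt" >> ab_plan.txt
done
cat ab_plan.txt
# When the plan looks right, execute it with: sh ab_plan.txt
```

Saving each report to its own file makes it easy to compare pages later and spot the slow ones.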
If you want to save the output, dump it to a file:
$ ab [some flags] > localhost.txt
The information ab feeds you provides a high-level view of how your site and server will perform when they are under heavy load. If there is a particular page you care about, say example.com/store, test against that page, not just your home page.
Understanding the output of ab
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Finished 10 requests

Server Software:        Apache/2.2.9
Server Hostname:        localhost
Server Port:            80

Document Path:          /
Document Length:        23810 bytes

Concurrency Level:      10
Time taken for tests:   30.039 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      243310 bytes
HTML transferred:       238100 bytes
Requests per second:    0.33 [#/sec] (mean)
Time per request:       30039.154 [ms] (mean)
Time per request:       3003.915 [ms] (mean, across all concurrent requests)
Transfer rate:          7.91 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1   11  16.8      1      39
Processing: 29913 29958  34.6  29952   30038
Waiting:    29162 29219  26.1  29237   29237
Total:      29951 29969  27.8  29952   30039

Percentage of the requests served within a certain time (ms)
  50%  29952
  66%  29977
  75%  29980
  80%  29983
  90%  30039
  95%  30039
  98%  30039
  99%  30039
 100%  30039 (longest request)
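If you saved a report to a file with the redirect shown earlier, you can pull the headline numbers out of it instead of eyeballing the whole report. The heredoc below is a stub standing in for a real localhost.txt:

```shell
# Extract key metrics from a saved ab report (stub data shown here).
cat > localhost.txt <<'EOF'
Requests per second:    0.33 [#/sec] (mean)
Time per request:       30039.154 [ms] (mean)
Transfer rate:          7.91 [Kbytes/sec] received
EOF
rps=$(awk '/Requests per second/ {print $4}' localhost.txt)
tpr=$(awk '/^Time per request/ {print $4; exit}' localhost.txt)
echo "requests/sec: $rps  ms/request: $tpr"
```

Run this against the report from each test run and you can track the two numbers that matter as you tune your site.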
- Document Length
- The size of the returned HTML in bytes; here, 23810 bytes. The bigger this number, the more time and bandwidth every request costs.
- Concurrency Level
- ab will try to simulate 10 or 100 or 1000 users hitting the site at the same time. You specify this value on the command line with the -c flag.
- Time taken for tests
- This time will be variable if you're testing against a number of requests.
- Complete requests
- Not all requests are created equal: ab counts both the requests that finish and the failures. If you're using a fixed time, you will want to check this number.
- Requests per second
- This is sort of the "money shot" of this whole testing process. When your server is under heavy load, you will often cite this value as an indicator of how fast things were moving. Your aim should be to get more requests per second.
- mean, median and standard deviation
- In short, the mean is the average. However, a mean is not worth much without a standard deviation, so read the Wikipedia article on the mean for more info. You'll also see the median, which is the middle value of all the measurements when sorted, and so is less thrown off by a few very slow requests than the mean is.
- Time per request
- If you're using concurrency, you will get two values here. The first is the mean time a single user waits for a response; with a concurrency of 10, it will be about 10 times larger than the mean across all concurrent requests.
- Transfer rate
- In other words, bandwidth, measured in kilobytes per second. Most connection speeds, like the one to your home network, are quoted in kilobits, so multiply this number by 8 for a better comparison.
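Two quick sanity checks tie the numbers in the sample report together: the first "Time per request" is roughly the concurrency level times the mean across all concurrent requests, and the transfer rate in kilobits is the kilobyte figure times 8:

```shell
# 10 concurrent requests x 3003.915 ms (mean across all) ~= 30039 ms (mean)
awk 'BEGIN { printf "%.3f ms\n", 10 * 3003.915 }'
# 7.91 KB/s received x 8 = the rate in kilobits per second
awk 'BEGIN { printf "%.2f kbit/s\n", 7.91 * 8 }'
```

Doing this arithmetic once by hand makes the report much less mysterious the next time you read one.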
How do I speed up my site?
- Follow the recommendations from YSlow. One YSlow point that's further down the list is reducing the number of DOM elements. This shortens your document length, and a smaller file is served faster.
- Perform a code audit of your custom modules. For that matter, perform an audit of some of the contrib modules you're using. Nobody's perfect, after all.
- Avoid logic in template files. One .tpl.php file can be executed hundreds of times during a page load, and PHP operations all take fractions of a second to interpret and perform. Try moving some logic into custom modules or into _preprocess() functions.
- Ask yourself if you've done everything you can to minimize and optimize interaction with the database. This is part of the code audit task, as well. If you are using variable_set() on a user-facing page, you may need to hit the white board and re-think the problem. Also be aware that calling cache_clear_all() is like trying to slice a tomato with a sledgehammer.
- Take a serious look at caching data in memcached or one of the tools provided to you by the Cache Router module (APC, XCache, etc.). For a different approach to this problem, check out Boost.
Getting more information
Until next time, here is some homework:
- This book was very helpful for me, learning about performance: Building Scalable Web Sites by Cal Henderson
- Drupal caching, speed and performance
- Is 'private' download method more CPU intensive?
- Drupal Performance Measurement & Benchmarking
- Server tuning considerations
- A list of performance-related Drupal Modules
- If you're getting a white screen, read up on the 16MB PHP memory limit and how to raise it without using all your RAM at once
Wow, this is incredibly useful! I will definitely be bookmarking this page as suggested. I appreciate that you compiled the tools and 'vocab' terms here for beginners; this should help out a ton in analyzing my work.
Wondering why you specified to "turn off your page caching" -- wouldn't you want to stress test with your page caching on if your site is largely read-only for anon users and you plan to enable page caching?
you wouldn't want to stress test with the page cache on, because it will give you results that don't mean much (it's a best-case scenario). your caches aren't always hot, people post content on your site (hopefully), and traffic might not all go to that one type of page. if a bot is crawling your site, or people are commenting on that page, etc, page_cache isn't going to save you.
i like to generate a list of all the pages (or use a site map) and benchmark all of the pages using 5-10 concurrent users. you can usually find trouble areas of your site by looking at specific common elements between the page types (ie: a block, menu, view, panel, etc). once you know that your blogs section or organic groups pages suck, you can then profile them or use devel to track painful queries and find out why (and fix it).
it's always better to start with the worst case scenario, then work up.