Monday, October 18, 2010

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and to establish a baseline for future regression testing. Conducting performance testing means engaging in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly.

A simple outline of the performance testing process is provided below.

A clearly defined set of expectations is essential for meaningful performance testing. If you don't know where you want to go in terms of the performance of the system, then it matters little which direction you take (remember Alice and the Cheshire Cat?). For example, for a Web application, you need to know at least two things:
  • expected load in terms of concurrent users or HTTP connections
  • acceptable response time (see the sketch after this list for one way to encode these targets)
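To make this concrete, here is a minimal Python sketch (the names and thresholds are mine, purely illustrative) that encodes such targets and checks a set of measured response times against them:

    from dataclasses import dataclass

    @dataclass
    class PerformanceGoal:
        concurrent_users: int     # expected load
        p95_response_ms: float    # acceptable 95th-percentile response time

    def p95(samples_ms):
        """Return the 95th percentile of a list of response times (ms)."""
        ordered = sorted(samples_ms)
        return ordered[max(0, int(round(0.95 * len(ordered))) - 1)]

    def meets_goal(goal, samples_ms):
        return p95(samples_ms) <= goal.p95_response_ms

    goal = PerformanceGoal(concurrent_users=100, p95_response_ms=2000.0)
    print(meets_goal(goal, [350.0, 420.0, 1800.0, 510.0]))   # True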
Once you know where you want to be, you can start on your way there by gradually increasing the load on the system while looking for bottlenecks. To take again the example of a Web application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
  • at the application level, developers can use profilers to spot inefficiencies in their code (for example poor search algorithms)
  • at the database level, developers and DBAs can use database-specific profilers and query optimizers
  • at the operating system level, system engineers can use utilities such as top, vmstat, iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU, memory, swap, and disk I/O; specialized kernel monitoring software can also be used (a small monitoring sketch follows this list)
  • at the network level, network engineers can use packet sniffers such as tcpdump, network protocol analyzers such as ethereal, and various utilities such as netstat, MRTG, ntop, mii-tool
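As a rough illustration of OS-level monitoring (a Linux-only sketch of my own; it simply reads /proc, reporting the same kind of data top and vmstat show), you can periodically sample load average and free memory while a test runs:

    import time

    def sample_linux_stats(interval_s=5, samples=3):
        """Periodically read load average and free memory on Linux.

        A bare-bones stand-in for watching top/vmstat during a load test;
        it only works where /proc is available.
        """
        for _ in range(samples):
            with open("/proc/loadavg") as f:
                load1 = f.read().split()[0]        # 1-minute load average
            with open("/proc/meminfo") as f:
                meminfo = dict(line.split(":", 1) for line in f)
            free_kb = meminfo["MemFree"].strip().split()[0]
            print(f"load(1m)={load1}  MemFree={free_kb} kB")
            time.sleep(interval_s)

    if __name__ == "__main__":
        sample_linux_stats()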
From a testing point of view, the activities described above all take a white-box approach, where the system is inspected and monitored "from the inside out" and from a variety of angles. Measurements are taken and analyzed, and as a result, tuning is done.

However, testers also take a black-box approach in running the load tests against the system under test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and measure response times. Some lightweight open source tools I've used in the past for this purpose are ab, siege, httperf. A more heavyweight tool I haven't used yet is OpenSTA. I also haven't used The Grinder yet, but it is high on my TODO list.
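In the same spirit as ab or httperf, here is a minimal Python sketch of a load generator (the URL, user count, and request count are placeholders): it simulates concurrent users with a thread pool and reports response times:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder; point at the system under test
    USERS = 20                       # simulated concurrent users
    REQUESTS = 200                   # total requests to issue

    def timed_get(url):
        """Issue one GET request and return its response time in milliseconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000.0

    def run_load_test():
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            times = list(pool.map(timed_get, [URL] * REQUESTS))
        times.sort()
        print(f"requests={len(times)}  "
              f"mean={sum(times) / len(times):.1f} ms  "
              f"p95={times[int(0.95 * len(times)) - 1]:.1f} ms")

    if __name__ == "__main__":
        run_load_test()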

When the results of the load test indicate that the performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and that your database is optimized for the given OS/hardware configuration. In this context, TDD practitioners will find a framework such as Mike Clark's jUnitPerf very useful; it enhances existing unit test code with load-test and timed-test functionality. Once a particular function or method has been profiled and tuned, developers can wrap its unit tests in jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this "continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to Python; I called it pyUnitPerf.
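To illustrate the idea (this is my own sketch, not jUnitPerf's or pyUnitPerf's actual API), a timed test can be written as an ordinary unit test that fails when the code under test exceeds its time budget:

    import time
    import unittest

    def critical_function(n=100000):
        # Placeholder for the profiled-and-tuned code under test.
        return sum(i * i for i in range(n))

    class TimedTest(unittest.TestCase):
        MAX_ELAPSED_MS = 50.0   # hypothetical performance requirement

        def test_critical_function_within_budget(self):
            start = time.perf_counter()
            critical_function()
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            self.assertLessEqual(
                elapsed_ms, self.MAX_ELAPSED_MS,
                f"took {elapsed_ms:.1f} ms, budget is {self.MAX_ELAPSED_MS} ms")

    if __name__ == "__main__":
        unittest.main()

Run with the rest of the suite, such a check turns a performance requirement into a regression test, which is exactly the point of continuous performance testing.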

If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:
  • Use Web cache mechanisms, such as the one provided by Squid
  • Publish highly-requested Web pages statically, so that they don't hit the database
  • Scale the Web server farm horizontally via load balancing
  • Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
  • Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
  • Increase the available network bandwidth
Performance tuning can sometimes be more art than science, due to the sheer complexity of the systems involved in a modern Web application. Care must be taken to modify one variable at a time and redo the measurements; otherwise multiple simultaneous changes can have subtle interactions that are hard to qualify and hard to reproduce.

In a standard test environment such as a test lab, it will not always be possible to replicate the production server configuration. In such cases, a staging environment that is a subset of the production environment can be used, and the expected performance of the system needs to be scaled down accordingly. For example, if staging runs half as many Web servers as production, a first approximation is to target roughly half the production load while keeping the same response-time goals.

The cycle "run load test->measure performance->tune system" is repeated until the system under test achieves the expected levels of performance. At this point, testers have a baseline for how the system behaves under normal conditions. This baseline can then be used in regression tests to gauge how well a new version of the software performs.
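As a sketch of how a baseline might be consumed (the file name, metric names, and tolerance are placeholders of my own), a regression check can compare a new run's numbers against the stored baseline figures:

    import json

    TOLERANCE = 1.10   # hypothetical: allow up to 10% regression

    def check_against_baseline(current, baseline_path="baseline.json"):
        """Compare current metrics (e.g. {'p95_ms': 410.0}) to a saved baseline."""
        with open(baseline_path) as f:
            baseline = json.load(f)
        regressions = {
            name: (value, baseline[name])
            for name, value in current.items()
            if name in baseline and value > baseline[name] * TOLERANCE
        }
        return regressions   # empty dict means the new build is within tolerance

    # Example: persist a baseline once, then check later runs against it.
    # with open("baseline.json", "w") as f:
    #     json.dump({"p95_ms": 400.0, "mean_ms": 180.0}, f)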

Another common goal of performance testing is to establish benchmark numbers for the system under test. There are many industry-standard benchmarks, such as the ones published by TPC, and many hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do not include a detailed specification of all the hardware and software configurations used in that particular test.

Performance Testing Parameters: The term "performance testing" is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is part of system testing, but it is also a distinct level of testing in its own right, one that verifies loads, volumes, and response times as defined by requirements.

Performance Testing process: For more information about the process, send me an email and I will reply with all the details as soon as possible.

Conducting a System Analysis

During this phase, the entire system is analyzed and broken down into specific components. Any one component can have a dramatic effect on the performance of the system. This step is critical to simulating and understanding the load and potential problem areas. Furthermore, it can later aid in making suggestions on how to resolve bottlenecks and improve performance. Some example tasks include:
  • Identify all hardware components
  • Identify all software components (a short inventory sketch follows this list)
  • Identify all unrelated processes and services that may affect system components
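As a purely illustrative starting point for such an inventory (real system analysis goes much further than this), Python's standard library can at least record the basics of the machine a test ran on:

    import os
    import platform

    def system_inventory():
        """Collect basic hardware/software facts for a test report."""
        return {
            "os": platform.platform(),          # OS name, release, and version
            "machine": platform.machine(),      # hardware architecture
            "cpu_count": os.cpu_count(),        # logical CPUs
            "python": platform.python_version(),
        }

    if __name__ == "__main__":
        for key, value in system_inventory().items():
            print(f"{key}: {value}")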
Usage Patterns – Understanding the Nature of the Load
