Wednesday, June 17, 2009

Analyze Performance Results, Report, and Retest

Managers and stakeholders need more than just the results from various tests — they need conclusions, as well as consolidated data that supports those conclusions. Technical team members also need more than just results — they need analysis, comparisons, and details behind how the results were obtained. Team members of all types get value from performance results being shared more frequently. Before results can be reported, the data must be analyzed. Consider the following important points when analyzing the data returned by your performance test:
Analyze the data both individually and as part of a collaborative, cross-functional technical team.
Analyze the captured data and compare the results against the metric's acceptable or expected level to determine whether the performance of the application being tested shows a trend toward or away from the performance objectives (a minimal comparison sketch follows this list).
If the test fails, a diagnosis and tuning step is generally warranted.
If you fix any bottlenecks, repeat the test to validate the fix.
With proper test design and usage analysis, performance-testing results often enable the team to analyze components at a deep level and correlate that information back to real-world usage.
Performance test results should enable informed architecture and business decisions.
Frequently, the analysis will reveal that, in order to completely understand the results of a particular test, additional metrics will need to be captured during subsequent test-execution cycles.
Immediately share test results and make raw data available to your entire team.
Talk to the consumers of the data to validate that the test achieved the desired results and that the data means what you think it means.
Modify the test to get new, better, or different information if the results do not represent what the test was defined to determine.
Use current results to set priorities for the next test.
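To make the comparison step above concrete, here is a minimal sketch in plain C (the sample response times and the three-second objective are hypothetical placeholders) that compares an approximate 90th-percentile response time from two test cycles against the objective to see whether results are trending toward or away from it:

/* Minimal sketch: compare approximate 90th-percentile response times from
   two test cycles against a response-time objective. Sample data and the
   3.0 s objective are hypothetical placeholders. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double percentile_90(double *samples, size_t n)
{
    /* Sort ascending, then take the value at the 90% position
       (a simple approximation; analysis tools are more precise). */
    qsort(samples, n, sizeof(double), cmp_double);
    size_t rank = (size_t)(0.9 * (double)(n - 1));
    return samples[rank];
}

int main(void)
{
    double objective_sec = 3.0;                         /* acceptance level   */
    double cycle1[] = { 2.1, 2.4, 3.9, 2.8, 3.2, 2.6 }; /* earlier test cycle */
    double cycle2[] = { 2.0, 2.2, 3.1, 2.5, 2.9, 2.4 }; /* latest test cycle  */

    double p90_prev = percentile_90(cycle1, sizeof cycle1 / sizeof cycle1[0]);
    double p90_curr = percentile_90(cycle2, sizeof cycle2 / sizeof cycle2[0]);

    printf("90th percentile: previous %.2f s, current %.2f s (objective %.2f s)\n",
           p90_prev, p90_curr, objective_sec);
    printf("Meets objective: %s\n", p90_curr <= objective_sec ? "yes" : "no");
    printf("Trend: %s the objective\n",
           p90_curr < p90_prev ? "toward" : "away from");
    return 0;
}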
Collecting metrics frequently produces very large volumes of data. Although it is tempting to reduce the amount of data, always exercise caution when using data-reduction techniques because valuable data can be lost. Most reports fall into one of the following two categories:
Technical Reports
Description of the test, including workload model and test environment.
Easily digestible data with minimal pre-processing.
Access to the complete data set and test conditions.
Short statements of observations, concerns, questions, and requests for collaboration.
Stakeholder Reports
Criteria to which the results relate.
Intuitive, visual representations of the most relevant data.
Brief verbal summaries of the chart or graph in terms of criteria.
Intuitive, visual representations of the workload model and test environment.
Access to associated technical reports, complete data sets, and test conditions.
Summaries of observations, concerns, and recommendations.
The key to effective reporting is to present information of interest to the intended audience in a manner that is quick, simple, and intuitive. The following are some underlying principles for achieving effective reports:
Report early, report often.
Report visually.
Report intuitively.
Use the right statistics.
Consolidate data correctly (see the sketch after this list).
Summarize data effectively.
Customize for the intended audience.
Use concise verbal summaries with strong but factual language.
Make the data available to stakeholders.
Filter out any unnecessary data.
If reporting intermediate results, include the priorities, concerns, and blockers for the next several test-execution cycles.
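As a small illustration of "consolidate data correctly," the following sketch in plain C (run names and numbers are hypothetical) shows why per-run averages should be combined using each run's sample count as a weight rather than by simply averaging the averages:

/* Sketch: consolidating average response times from several test runs.
   Run data is hypothetical. Averaging the per-run averages misleads when
   runs contain different numbers of samples; weight each run's average
   by its sample count instead. */
#include <stdio.h>

struct run_summary {
    const char *name;
    long samples;        /* number of timing samples in the run */
    double avg_sec;      /* average response time for the run   */
};

int main(void)
{
    struct run_summary runs[] = {
        { "run-1", 1000, 2.0 },
        { "run-2",  200, 4.0 },
    };
    size_t n = sizeof runs / sizeof runs[0];

    double naive = 0.0, weighted = 0.0;
    long total_samples = 0;

    for (size_t i = 0; i < n; i++) {
        naive += runs[i].avg_sec;
        weighted += runs[i].avg_sec * (double)runs[i].samples;
        total_samples += runs[i].samples;
    }
    naive /= (double)n;
    weighted /= (double)total_samples;

    printf("Average of averages (misleading): %.2f s\n", naive);
    printf("Sample-weighted average:          %.2f s\n", weighted);
    return 0;
}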


http://perftesting.codeplex.com/Wiki/View.aspx?title=How%20To%3A%20Conduct%20Performance%20Testing%20Core%20Steps%20for%20a%20Web%20Application

Source - Microsoft book

Execute the Performance Test

Executing tests is what most people envision when they think about performance testing. It makes sense that the process, flow, and technical details of test execution are extremely dependent on your tools, environment, and project context. Even so, there are some fairly universal tasks and considerations that need to be kept in mind when executing tests.

Much of the performance testing-related training available today treats test execution as little more than starting a test and monitoring it to ensure that the test appears to be running as expected. In reality, this step is significantly more complex than just clicking a button and monitoring machines. Test execution can be viewed as a combination of the following sub-tasks:
Coordinate test execution and monitoring with the team.
Validate tests, configurations, and the state of the environments and data.
Begin test execution.
While the test is running, monitor and validate scripts, systems, and data.
Upon test completion, quickly review the results for obvious indications that the test was flawed.
Archive the tests, test data, results, and other information necessary to repeat the test later if needed.
Log start and end times, the name of the result data, and so on. This will allow you to identify your data sequentially after your test is done.

As you prepare to begin test execution, it is worth taking the time to double-check the following items:
Validate that the test environment matches the configuration that you were expecting and/or designed your test for.
Ensure that both the test and the test environment are correctly configured for metrics collection.
Before running the real test, execute a quick smoke test to make sure that the test script and remote performance counters are working correctly. In the context of performance testing, a smoke test is designed to determine if your application can successfully perform all of its operations under a normal load condition for a short time (a minimal sketch follows this list).
Reset the system (unless your scenario calls for doing otherwise) and start a formal test execution.
Make sure that the test scripts’ execution represents the workload model you want to simulate.
Make sure that the test is configured to collect the key performance and business indicators of interest at this time.
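As an illustration of the smoke-test idea mentioned in the list above, here is a minimal sketch that uses libcurl as a stand-in for whatever load-generation tool is in use (the URLs and the 10-second limit are hypothetical); it exercises a few key operations once each and verifies that they respond successfully before the real test run begins:

/* Minimal smoke-test sketch using libcurl: exercise a few key operations
   once each under no load and verify they respond successfully within a
   loose time limit before starting the real test run.
   URLs and the 10-second limit are hypothetical placeholders. */
#include <stdio.h>
#include <curl/curl.h>

static size_t discard(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    (void)ptr; (void)userdata;
    return size * nmemb;          /* ignore the response body */
}

int main(void)
{
    const char *operations[] = {
        "http://testserver.example/catalog",
        "http://testserver.example/search?q=book",
        "http://testserver.example/cart",
    };
    int failures = 0;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

    for (size_t i = 0; i < sizeof operations / sizeof operations[0]; i++) {
        long status = 0;
        double elapsed = 0.0;

        curl_easy_setopt(curl, CURLOPT_URL, operations[i]);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &elapsed);

        int ok = (rc == CURLE_OK && status == 200 && elapsed < 10.0);
        printf("%-45s %s (HTTP %ld, %.2f s)\n",
               operations[i], ok ? "OK" : "FAIL", status, elapsed);
        if (!ok) failures++;
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return failures ? 1 : 0;
}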

Source - Microsoft book

Implement the Performance Test Design

The details of creating an executable performance test are extremely tool-specific. Regardless of the tool you are using, creating a performance test typically involves scripting a single usage scenario and then enhancing that scenario and combining it with other scenarios to ultimately represent a complete workload model.

Load-generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies and, even then, those technologies have to become prominent before the support can be built. This often means that the biggest challenge in a performance-testing project is getting your first relatively realistic test implemented, with users simulated in such a way that the application under test cannot legitimately tell the difference between simulated users and real users. Plan for this, and do not be surprised when it takes significantly longer than expected to get everything working smoothly.
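For illustration, the fragment below shows roughly what a single scripted usage scenario looks like as a VuGen (C) Action section; the transaction names, URLs, and think time are hypothetical, and other tools use different APIs but follow the same pattern of request, transaction markers, and think time:

/* Sketch of a single usage scenario as a VuGen (C) Action section.
   Transaction names, URLs, and think time are hypothetical; this runs
   inside VuGen/Controller, which supplies the web_* and lr_* functions. */
Action()
{
    lr_start_transaction("open_home_page");
    web_url("home",
            "URL=http://testserver.example/",
            "Resource=0",
            "Mode=HTML",
            LAST);
    lr_end_transaction("open_home_page", LR_AUTO);

    lr_think_time(5);   /* emulate the user pausing between steps */

    lr_start_transaction("view_catalog");
    web_url("catalog",
            "URL=http://testserver.example/catalog",
            "Resource=0",
            "Mode=HTML",
            LAST);
    lr_end_transaction("view_catalog", LR_AUTO);

    return 0;
}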
Source - Microsoft books

Configure the Performance Test Environment

Preparing the test environment, tools, and resources for test design implementation and test execution before features and components become available for testing can significantly increase the amount of testing that can be accomplished during the time those features and components are available.

Load-generation and application-monitoring tools are almost never as easy to get up and running as one expects. Whether issues arise from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or version compatibility between monitoring software and server operating systems, issues always seem to arise from somewhere. Start early to ensure that they are resolved before you begin testing.

Additionally, plan to periodically reconfigure, update, add to, or otherwise enhance your load-generation environment and associated tools throughout the project. Even if the application under test stays the same and the load-generation tool is working properly, it is likely that the metrics you want to collect will change. This frequently implies some degree of change to, or addition of, monitoring tools.
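Because being correctly configured for metrics collection often comes down to whether performance counters can actually be read from the machines involved, the following is a minimal sketch using the Windows PDH API that samples processor utilization; the server name mentioned in the comment is a hypothetical example, and this is simply the kind of early check that surfaces monitoring problems before test execution:

/* Sketch: verifying that a key resource counter can be collected before
   the test starts, using the Windows PDH API (processor utilization on
   the local machine). A remote counter path names the server, e.g.
   "\\WEBSRV01\Processor(_Total)\% Processor Time" (hypothetical name).
   Link with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) {
        fprintf(stderr, "PdhOpenQuery failed\n");
        return 1;
    }
    if (PdhAddCounter(query, TEXT("\\Processor(_Total)\\% Processor Time"),
                      0, &counter) != ERROR_SUCCESS) {
        fprintf(stderr, "PdhAddCounter failed - check the counter path\n");
        return 1;
    }

    /* Rate counters need two collections separated by an interval. */
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                    NULL, &value) == ERROR_SUCCESS) {
        printf("Processor utilization: %.1f%%\n", value.doubleValue);
    } else {
        fprintf(stderr, "Counter collected but could not be formatted\n");
        return 1;
    }
    PdhCloseQuery(query);
    return 0;
}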
Source - Microsoft books

Plan and Design Tests

Planning and designing performance tests involves identifying key usage scenarios, determining appropriate variability across users, identifying and generating test data, and specifying the metrics to be collected. Ultimately, these items will provide the foundation for workloads and workload profiles.

When designing and planning tests with the intention of characterizing production performance, your goal should be to create real-world simulations in order to provide reliable data that will enable your organization to make informed business decisions. Real-world test designs will significantly increase the relevancy and usefulness of results data.

Key usage scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script. Consider the following when identifying key usage scenarios:
Contractually obligated usage scenario(s)
Usage scenarios implied or mandated by performance-testing goals and objectives
Most common usage scenario(s)
Business-critical usage scenario(s)
Performance-intensive usage scenario(s)
Usage scenarios of technical concern
Usage scenarios of stakeholder concern
High-visibility usage scenarios

When identified, captured, and reported correctly, metrics provide information about how your application's performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application. It is useful to identify the metrics related to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.
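As a small illustration of how identified scenarios and their metrics can be pulled together into a workload model, here is a sketch in plain C; the scenario names, load percentages, think times, and response-time targets are hypothetical placeholders for values that would come from usage analysis:

/* Sketch: a workload model expressed as data, combining key usage
   scenarios, their share of the simulated load, user think time, and the
   metric each scenario is expected to meet. All values are hypothetical. */
#include <stdio.h>

struct scenario {
    const char *name;          /* key usage scenario                */
    double load_share_pct;     /* share of concurrent virtual users */
    int think_time_sec;        /* pause between user actions        */
    double target_resp_sec;    /* response-time metric to collect   */
};

int main(void)
{
    struct scenario workload[] = {
        { "browse_catalog", 50.0, 10, 3.0 },
        { "search",         30.0,  8, 2.0 },
        { "place_order",    20.0, 15, 5.0 },
    };
    size_t n = sizeof workload / sizeof workload[0];
    double total = 0.0;

    printf("%-16s %8s %8s %10s\n", "scenario", "load %", "think s", "target s");
    for (size_t i = 0; i < n; i++) {
        printf("%-16s %8.1f %8d %10.1f\n", workload[i].name,
               workload[i].load_share_pct, workload[i].think_time_sec,
               workload[i].target_resp_sec);
        total += workload[i].load_share_pct;
    }
    /* Sanity check: the scenario mix should account for all users. */
    printf("Total load share: %.1f%% %s\n", total,
           total == 100.0 ? "(complete)" : "(does not add up to 100%)");
    return 0;
}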

Source - Microsoft book

Identify Performance Acceptance Criteria

It generally makes sense to start identifying, or at least estimating, the desired performance characteristics of the application early in the development life cycle. This can be accomplished most simply by noting the performance characteristics that your users and stakeholders equate with good performance. The notes can be quantified at a later time. Classes of characteristics that frequently correlate to a user’s or stakeholder’s satisfaction typically include:
Response time. For example, the product catalog must be displayed in less than three seconds.
Throughput. For example, the system must support 25 book orders per second.
Resource utilization. For example, processor utilization is not more than 75 percent. Other important resources that need to be considered for setting objectives are memory, disk input/output (I/O), and network I/O.
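The three example criteria above can be captured in a simple, testable form. The sketch below, in plain C, encodes them and evaluates a set of measured results against them; the measured values are hypothetical placeholders:

/* Sketch: encoding the example acceptance criteria and checking measured
   results against them. Measured values are hypothetical placeholders. */
#include <stdio.h>

int main(void)
{
    /* Acceptance criteria from the examples above. */
    const double max_catalog_resp_sec = 3.0;   /* response time        */
    const double min_orders_per_sec   = 25.0;  /* throughput           */
    const double max_cpu_pct          = 75.0;  /* resource utilization */

    /* Measured results from a test run (placeholders). */
    double catalog_resp_sec = 2.7;
    double orders_per_sec   = 26.4;
    double cpu_pct          = 81.0;

    printf("Response time  %.1f s   (<= %.1f s)   %s\n", catalog_resp_sec,
           max_catalog_resp_sec,
           catalog_resp_sec <= max_catalog_resp_sec ? "pass" : "fail");
    printf("Throughput     %.1f /s  (>= %.1f /s)  %s\n", orders_per_sec,
           min_orders_per_sec,
           orders_per_sec >= min_orders_per_sec ? "pass" : "fail");
    printf("Processor      %.1f %%   (<= %.1f %%)   %s\n", cpu_pct,
           max_cpu_pct, cpu_pct <= max_cpu_pct ? "pass" : "fail");
    return 0;
}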

Source - Microsoft books

Identify the Performance Test Environment

The environment in which your performance tests will be executed, along with the tools and associated hardware necessary to execute the performance tests, constitutes the test environment. Under ideal conditions, if the goal is to determine the performance characteristics of the application in production, the test environment is an exact replica of the production environment but with the addition of load-generation and resource-monitoring tools. Exact replicas of production environments are uncommon.

The degree of similarity between the hardware, software, and network configuration of the application under test conditions and under actual production conditions is often a significant consideration when deciding what performance tests to conduct and what size loads to test. It is important to remember that it is not only the physical and software environments that impact performance testing, but also the objectives of the test itself. Often, performance tests are applied against a proposed new hardware infrastructure to validate the supposition that the new hardware will address existing performance concerns.

The key factor in identifying your test environment is to completely understand the similarities and differences between the test and production environments. Some critical factors to consider are:
Hardware:
  Configurations
  Machine hardware (processor, RAM, etc.)
Network:
  Network architecture and end-user location
  Load-balancing implications
  Cluster and Domain Name System (DNS) configurations
Tools:
  Load-generation tool limitations
  Environmental impact of monitoring tools
Software:
  Other software installed or running in shared or virtual environments
  Software license constraints or differences
  Storage capacity and seed data volume
  Logging levels
External factors:
  Volume and type of additional traffic on the network
  Scheduled or batch processes, updates, or backups
  Interactions with other systems

Source - Microsoft book

Core Performance-Testing Steps

The seven core performance-testing steps can be summarized as follows.


Step 1. Identify the Test Environment.
Step 2. Identify Performance Acceptance Criteria.
Step 3. Plan and Design Tests.
Step 4. Configure the Test Environment.
Step 5. Implement the Test Design.
Step 6. Execute the Test.
Step 7. Analyze Results, Report, and Retest.

What Are Vusers?

Load generators are controlled by VuGen scripts, which issue non-GUI API calls using the same protocols as the client under test. WinRunner GUI Vusers, by contrast, emulate keystrokes, mouse clicks, and other user-interface actions on the client being tested. Only one GUI Vuser can run from a machine unless the LoadRunner Terminal Services Manager manages remote machines with the Terminal Server Agent enabled and logged into a Terminal Services Client session.
During run time, threaded Vusers share a common memory pool, so threading supports more Vusers per load generator.
The status of Vusers on all load generators starts at "Running", then changes to "Ready" after the init section of the script has run. Vusers end as "Finished" with a passed or failed status, and are automatically "Stopped" when the load generator is overloaded.
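The status flow described above maps onto the three standard sections of a Vuser script. The sketch below (VuGen C, with hypothetical step names and URL) shows where per-Vuser initialization, repeated iterations, and cleanup live:

/* Sketch: the three standard sections of a VuGen (C) Vuser script.
   vuser_init runs once per Vuser (the "init section" referred to above),
   Action runs once per iteration, and vuser_end runs once at the end.
   Step names and URL are hypothetical; lr_* / web_* come from VuGen. */

vuser_init()
{
    lr_log_message("logging in once per Vuser");   /* init: Running -> Ready */
    return 0;
}

Action()
{
    web_url("home",                                /* repeated each iteration */
            "URL=http://testserver.example/",
            "Resource=0",
            "Mode=HTML",
            LAST);
    return 0;
}

vuser_end()
{
    lr_log_message("logging out");                 /* end status: Finished */
    return 0;
}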

Source - Wilson Mar notes

LoadRunner Architecture

LoadRunner works by creating virtual users who take the place of real users operating client software, such as IE sending requests using the HTTP protocol to IIS or Apache web servers.
Requests from many virtual user clients are generated by "Load Generators" in order to create a load on servers.

These load generator agents are started and stopped by Mercury's "Controller" program.
The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".
Scripts are crafted using Mercury's "Virtual User Generator" ("VuGen"). It generates C-language script code to be executed by virtual users by capturing network traffic between Internet application clients and servers.
During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.
Each HTML report page generated by Analysis includes a link to results in a text file which