Wednesday, June 17, 2009
Analyze Performance Results, Report, and Retest
Analyze the data both individually and as part of a collaborative, cross-functional technical team.
Analyze the captured data and compare the results against the metric’s acceptable or expected level to determine whether the performance of the application being tested is trending toward or away from the performance objectives (a sketch of such a comparison follows this list).
If the test fails, a diagnosis and tuning step is generally warranted.
If you fix any bottlenecks, repeat the test to validate the fix.
With proper test design and usage analysis, performance-testing results often enable the team to analyze components at a deep level and correlate that information back to real-world usage.
Performance test results should enable informed architecture and business decisions.
Frequently, the analysis will reveal that, in order to completely understand the results of a particular test, additional metrics will need to be captured during subsequent test-execution cycles.
Immediately share test results and make raw data available to your entire team.
Talk to the consumers of the data to validate that the test achieved the desired results and that the data means what you think it means.
Modify the test to get new, better, or different information if the results do not represent what the test was defined to determine.
Use current results to set priorities for the next test.
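As a rough illustration of the comparison described above, the sketch below (Python; the objective value and sample data are made up for illustration) checks each test cycle's 95th-percentile response time against an objective and notes whether successive cycles trend toward or away from it:

# Sketch: compare captured response-time samples against an illustrative
# objective and report whether successive test cycles trend toward or away
# from it. The objective value and sample data below are hypothetical.

OBJECTIVE_95TH_MS = 3000  # e.g., 95th-percentile response time under 3 seconds

def percentile(samples, pct):
    """Return the pct-th percentile of a list of numeric samples."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

def analyze_cycles(cycles):
    """cycles: list of (cycle_name, response_times_in_ms) per test-execution cycle."""
    previous = None
    for name, samples in cycles:
        p95 = percentile(samples, 95)
        status = "PASS" if p95 <= OBJECTIVE_95TH_MS else "FAIL"
        trend = ""
        if previous is not None:
            trend = ", improving" if p95 < previous else ", regressing"
        print(f"{name}: 95th percentile = {p95:.0f} ms ({status}{trend})")
        previous = p95

analyze_cycles([
    ("cycle-1", [1200, 1500, 2900, 3100, 1800]),
    ("cycle-2", [1100, 1400, 2500, 2700, 1600]),
])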
Collecting metrics frequently produces very large volumes of data. Although it is tempting to reduce the amount of data, always exercise caution when using data-reduction techniques because valuable data can be lost. Most reports fall into one of the following two categories:
Technical Reports
Description of the test, including workload model and test environment.
Easily digestible data with minimal pre-processing.
Access to the complete data set and test conditions.
Short statements of observations, concerns, questions, and requests for collaboration.
Stakeholder Reports
Criteria to which the results relate.
Intuitive, visual representations of the most relevant data.
Brief verbal summaries of the chart or graph in terms of criteria.
Intuitive, visual representations of the workload model and test environment.
Access to associated technical reports, complete data sets, and test conditions.
Summaries of observations, concerns, and recommendations.
The key to effective reporting is to present information of interest to the intended audience in a manner that is quick, simple, and intuitive. The following are some underlying principles for achieving effective reports:
Report early, report often.
Report visually.
Report intuitively.
Use the right statistics.
Consolidate data correctly (see the sketch after this list).
Summarize data effectively.
Customize for the intended audience.
Write concise verbal summaries in strong but factual language.
Make the data available to stakeholders.
Filter out any unnecessary data.
If reporting intermediate results, include the priorities, concerns, and blocks for the next several test-execution cycles.
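For the "use the right statistics" and "consolidate data correctly" points above, the sketch below (Python, with made-up response-time samples) shows why pooling the raw samples and reporting percentiles is safer than averaging per-run averages:

# Sketch: rather than averaging per-run averages (which hides spikes and
# weights runs with different sample counts equally), pool the raw samples
# and report the median and 90th percentile. The sample data is invented.

import statistics

run_a = [200, 220, 210, 4000]   # one slow outlier
run_b = [205, 215, 225, 230, 240, 250]

# Misleading consolidation: average of per-run averages
avg_of_avgs = (statistics.mean(run_a) + statistics.mean(run_b)) / 2

# Better: pool the raw samples, then summarize
pooled = sorted(run_a + run_b)
median = statistics.median(pooled)
p90 = pooled[max(0, int(round(0.9 * len(pooled))) - 1)]

print(f"average of per-run averages: {avg_of_avgs:.1f} ms")
print(f"pooled median: {median:.1f} ms, pooled 90th percentile: {p90} ms")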
http://perftesting.codeplex.com/Wiki/View.aspx?title=How%20To%3A%20Conduct%20Performance%20Testing%20Core%20Steps%20for%20a%20Web%20Application
Source - Microsoft book
Execute the Performance Test
Coordinate test execution and monitoring with the team.
Validate tests, configurations, and the state of the environments and data.
Begin test execution.
While the test is running, monitor and validate scripts, systems, and data.
Upon test completion, quickly review the results for obvious indications that the test was flawed.
Archive the tests, test data, results, and other information necessary to repeat the test later if needed.
Log start and end times, the name of the result data, and so on. This will allow you to identify your data sequentially after your test is done.
As you prepare to begin test execution, it is worth taking the time to double-check the following items:
Validate that the test environment matches the configuration that you were expecting and/or designed your test for.
Ensure that both the test and the test environment are correctly configured for metrics collection.
Before running the real test, execute a quick smoke test to make sure that the test script and remote performance counters are working correctly. In the context of performance testing, a smoke test is designed to determine if your application can successfully perform all of its operations under a normal load condition for a short time. A sketch of such a smoke test follows this list.
Reset the system (unless your scenario calls for doing otherwise) and start a formal test execution.
Make sure that the test scripts’ execution represents the workload model you want to simulate.
Make sure that the test is configured to collect the key performance and business indicators of interest at this time.
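As a rough sketch of the smoke test mentioned above, the following Python snippet hits a few key operations once each and confirms they respond successfully within a generous threshold before the formal run starts; the URLs and threshold are placeholders for whatever your scenario actually uses:

# Sketch of a pre-run smoke test: exercise each key operation once under no
# load and confirm it responds successfully within a generous threshold.
# The URLs and threshold below are placeholders.

import time
import urllib.request

KEY_OPERATIONS = [
    "http://testserver.example/catalog",
    "http://testserver.example/cart",
]
THRESHOLD_SECONDS = 5.0

def smoke_test(urls):
    for url in urls:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=THRESHOLD_SECONDS) as resp:
                status = resp.status
        except Exception as exc:
            print(f"SMOKE FAIL: {url} ({exc})")
            return False
        elapsed = time.monotonic() - start
        if status != 200 or elapsed > THRESHOLD_SECONDS:
            print(f"SMOKE FAIL: {url} status={status} time={elapsed:.1f}s")
            return False
        print(f"SMOKE OK: {url} ({elapsed:.1f}s)")
    return True

if smoke_test(KEY_OPERATIONS):
    print("Environment looks healthy; proceed with the formal test run.")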
Source - Microsoft book
Implement the Performance Test Design
Source - Microsoft books
Configure the Performance Test Environment
Source - Microsoft books
Plan and Design Tests
Planning and designing performance tests involves identifying key usage scenarios, determining appropriate variability across users, identifying and generating test data, and specifying the metrics to be collected. Ultimately, these items will provide the foundation for workloads and workload profiles.
When designing and planning tests with the intention of characterizing production performance, your goal should be to create real-world simulations in order to provide reliable data that will enable your organization to make informed business decisions. Real-world test designs will significantly increase the relevancy and usefulness of results data.
Key usage scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script. Consider the following when identifying key usage scenarios:
Contractually obligated usage scenario(s)
Usage scenarios implied or mandated by performance-testing goals and objectives
Most common usage scenario(s)
Business-critical usage scenario(s)
Performance-intensive usage scenario(s)
Usage scenarios of technical concern
Usage scenarios of stakeholder concern
High-visibility usage scenarios
When identified, captured, and reported correctly, metrics provide information about how your application’s performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application. It is useful to identify the metrics related to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.
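One lightweight way to capture a workload model once the key scenarios are chosen is as a simple scenario mix. The Python sketch below uses hypothetical scenario names and percentages; real numbers would come from your own usage analysis:

# Sketch of a workload model: key usage scenarios with an illustrative
# distribution of virtual users. Names and weights are hypothetical.

WORKLOAD_MODEL = {
    "browse_catalog": 0.50,   # most common scenario
    "search":         0.30,
    "place_order":    0.15,   # business-critical scenario
    "admin_reports":  0.05,   # performance-intensive scenario
}

def distribute_users(total_users, model):
    """Allocate virtual users across scenarios according to the model."""
    allocation = {name: round(total_users * weight) for name, weight in model.items()}
    # Give any rounding leftovers to the most common scenario.
    leftover = total_users - sum(allocation.values())
    busiest = max(model, key=model.get)
    allocation[busiest] += leftover
    return allocation

print(distribute_users(200, WORKLOAD_MODEL))
# {'browse_catalog': 100, 'search': 60, 'place_order': 30, 'admin_reports': 10}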
Source - Microsoft book
Identify Performance Acceptance Criteria
Response time. For example, the product catalog must be displayed in less than three seconds.
Throughput. For example, the system must support 25 book orders per second.
Resource utilization. For example, processor utilization is not more than 75 percent. Other important resources that need to be considered for setting objectives are memory, disk input/output (I/O), and network I/O.
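The examples above translate naturally into explicit thresholds that can be checked automatically after each run. The sketch below (Python) encodes them that way; the measured values are invented for illustration:

# Sketch: record acceptance criteria as explicit thresholds and check
# measured values against them. Measured values below are made up.

ACCEPTANCE_CRITERIA = {
    # metric name: (threshold, comparison, unit)
    "catalog_response_time": (3.0, "<=", "seconds"),
    "order_throughput":      (25.0, ">=", "orders/second"),
    "processor_utilization": (75.0, "<=", "percent"),
}

measured = {
    "catalog_response_time": 2.4,
    "order_throughput": 27.5,
    "processor_utilization": 81.0,
}

for metric, (threshold, op, unit) in ACCEPTANCE_CRITERIA.items():
    value = measured[metric]
    passed = value <= threshold if op == "<=" else value >= threshold
    print(f"{metric}: {value} {unit} (must be {op} {threshold}) -> {'PASS' if passed else 'FAIL'}")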
Source - Microsoft books
Identify the Performance Test Environment
Hardware
Configurations
Machine hardware (processor, RAM, etc.)
Network
Network architecture and end-user location
Load-balancing implications
Cluster and Domain Name System (DNS) configurations
Tools
Load-generation tool limitations
Environmental impact of monitoring tools
Software
Other software installed or running in shared or virtual environments
Software license constraints or differences
Storage capacity and seed data volume
Logging levels
External factors
Volume and type of additional traffic on the network
Scheduled or batch processes, updates, or backups
Interactions with other systems
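One way to keep these differences visible is to record the test and production environments side by side and list where they diverge. The following Python sketch uses illustrative attributes and values only:

# Sketch: record test- and production-environment characteristics so that
# differences (and their likely impact on results) are explicit.

PRODUCTION_ENV = {
    "web_servers": 4,
    "cpu_cores_per_server": 8,
    "ram_gb_per_server": 32,
    "load_balanced": True,
    "logging_level": "warning",
}

TEST_ENV = {
    "web_servers": 2,
    "cpu_cores_per_server": 8,
    "ram_gb_per_server": 16,
    "load_balanced": True,
    "logging_level": "debug",
}

def environment_differences(test, production):
    """Return the attributes where the test environment differs from production."""
    return {
        key: (test.get(key), production[key])
        for key in production
        if test.get(key) != production[key]
    }

for key, (test_value, prod_value) in environment_differences(TEST_ENV, PRODUCTION_ENV).items():
    print(f"{key}: test={test_value}, production={prod_value}")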
Source - Microsoft book
Core Performance-Testing Steps
The seven core performance-testing steps can be summarized as follows.
Step 1. Identify the Test Environment.
Step 2. Identify Performance Acceptance Criteria.
Step 3. Plan and Design Tests.
Step 4. Configure the Test Environment.
Step 5. Implement the Test Design.
Step 6. Execute the Test.
Step 7. Analyze Results, Report, and Retest.
What Are Vusers?
During run time, threaded Vusers share a common memory pool, so threading supports more Vusers per load generator.
The status of Vusers on all load generators starts at "Running", then changes to "Ready" once each Vuser has passed through the init section of the script. Vusers end as "Finished" with a passed or failed status, and are automatically "Stopped" when the load generator is overloaded.
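The memory-sharing point is generic to any thread-based design rather than specific to LoadRunner's own code. The Python sketch below shows threads reading one shared copy of test data, which is why thread-based virtual users cost less memory per user than process-based ones:

# Generic illustration (not LoadRunner code): threads in one process share a
# single copy of in-memory data, while each separate process would get its
# own copy, so threaded virtual users use less memory per user.

import threading

SHARED_TEST_DATA = {"product_ids": list(range(10000))}  # loaded once per process

def vuser(vuser_id):
    # Every thread reads the same SHARED_TEST_DATA object; no per-Vuser copy
    # of the 10,000-entry list is made.
    first_product = SHARED_TEST_DATA["product_ids"][0]
    print(f"Vuser {vuser_id} using shared data, first product id = {first_product}")

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()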
Source - Wilson Mar's notes
LoadRunner Architecture
LoadRunner works by creating virtual users who take the place of real users operating client software, such as IE sending requests using the HTTP protocol to IIS or Apache web servers.
Requests from many virtual user clients are generated by "Load Generators" in order to create a load on servers.
These load generator agents are started and stopped by Mercury's "Controller" program.
The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".
Scripts are crafted using Mercury's "Virtual User Generator" ("VuGen"), which captures network traffic between Internet application clients and servers and generates C-language script code to be executed by virtual users.
During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML web page.
Each HTML report page generated by Analysis includes a link to results in a text file which