Micro Focus LoadRunner (formerly HP LoadRunner)
What it does
Micro Focus LoadRunner is a performance testing tool. It tests the performance of an IT system by simulating the actions of multiple simultaneous users, generating load on the system and measuring the time the system takes to respond to user commands. Generally, as the load increases, the system slows down and response times increase. For example, LoadRunner can be used to simulate 1,000 users performing transactions at the same time on an e-commerce website. A test might begin with 10 users and gradually ramp up to 1,000, measuring response times as the load increases and showing the point at which they are no longer acceptable. In many cases LoadRunner makes performance testing a practical possibility, since such tests are difficult and expensive to perform manually.
Why it is useful
Performance testing is increasingly recognised as an indispensable part of quality control as IT systems become more pervasive and more important to many organisations’ success. An unexpectedly high volume of users can overwhelm an IT system, and for key systems even a few hours’ downtime can be extremely costly. Micro Focus LoadRunner enables organisations to test their IT systems in advance and thus reduce the risk of performance-related failure.
LoadRunner simulates human users by creating multiple simultaneous processes known as virtual users, or Vusers (in Windows, Vusers are implemented either as processes or as threads). Each Vuser runs a simple program called a script. LoadRunner consists of four main components: the Vuser Generator, the Load Generator, the Controller and LoadRunner Analysis.
The Vuser Generator, or VuGen, is used to create, customise and debug scripts. Scripts are normally created by recording the actions of a human user on the system under test. To record a script, the details of the application under test are entered into the VuGen, which launches the application in recording mode. The user then performs the actions of the script on the application under test, with the VuGen recording each action. Once the recording is finished, the VuGen can play the user’s actions back.
A newly recorded script must normally be customised before it can be used in a performance test. For example, if the users of a system under test are required to log in with different user credentials, each Vuser must also do so. In such a case, the script must be customised so that the login credentials vary from one Vuser to the next. The VuGen provides a parameterisation mechanism to allow this kind of variation.
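As a sketch of how this looks in a script, a recorded login step with hard-coded credentials can be rewritten to use parameters (written in curly braces), which VuGen substitutes at run time from a data file so that each Vuser logs in with different values. The URL and form field names below are placeholders; `web_submit_data` and the `{UserName}`/`{Password}` parameter syntax follow standard VuGen conventions, and the fragment runs only inside the VuGen/Load Generator runtime, not as standalone C.

```c
/* Recorded login step after parameterisation: VuGen replaces
   {UserName} and {Password} at run time with values drawn from
   a parameter data file, so each Vuser logs in with different
   credentials. The URL and field names are placeholders. */
web_submit_data("login",
    "Action=http://www.example.com/login",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value={UserName}", ENDITEM,
    "Name=password", "Value={Password}", ENDITEM,
    LAST);
```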
A script is stored in the VuGen as a program written in C or Java, and can be presented to the user either in an intuitive graphical format or as the C or Java code. The graphical format is best suited to routine recording, customisation and playback; for more complex customisation, it is necessary to edit the C or Java code. Scripts can in fact be written entirely by hand with no recording at all, although it is rarely advantageous to do so.
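To illustrate the code view, a minimal web (HTTP/HTML) script as VuGen stores it might look like the following. The URL and step name are placeholders; `web_url` is a standard LoadRunner function, and the script compiles only inside the VuGen/Load Generator runtime, not as a standalone C program.

```c
/* Action() is the main section of a VuGen web script.
   It runs once per iteration for each Vuser. */
Action()
{
    /* Fetch the home page of the system under test
       (example.com is a placeholder). */
    web_url("home",
            "URL=http://www.example.com/",
            "Resource=0",
            "Mode=HTML",
            LAST);

    return 0;
}
```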
The VuGen can play back any script, but only one at a time, and therefore cannot generate significant load. That task is performed by the Load Generator, which creates multiple Vusers (each implemented either as a thread or as a process, depending on settings) and assigns a script to each. Each Vuser then runs its script for the duration of the test. The Load Generator is not an interactive program; it runs in the background with no user interface, starting and stopping the Vusers and applying any options and settings on the instructions of the Controller. A single Controller can control one or many instances of the Load Generator, which may be installed on the same machine as the Controller or on one or more other machines on the same network. Typically, several instances of the Load Generator are installed, each on a different machine and none on the same machine as the Controller.
The number of Vusers that can be generated by each Load Generator instance is normally in the tens or low hundreds. The exact number depends on various factors, including the specifications of the generator machine and the network protocol used by the application under test (common protocols include HTTP, Oracle and other database protocols, Java RMI and Citrix).
The LoadRunner Controller controls the execution of performance tests. It has a user interface which allows various aspects of the test to be set up, including the number of Vusers, the script to be run by each Vuser and the duration of the test. The user interface is also used to start and stop tests and report on the progress of the test while it is running.
The central quantity measured by Micro Focus LoadRunner is the system’s response time to user actions: for example, the time taken by a stock control system, given a part number, to retrieve the quantity in stock. The actions whose response times are measured are determined by the tester in the script. First, the tester records a script which includes the action whose response time is to be measured, in this example the retrieval of the quantity of a part in stock. Second, during the customisation phase after recording, the tester inserts statements in the script at the point of the stock retrieval to measure the response time. In LoadRunner terms, an action whose response time is being measured is known as a ‘transaction’. Another key quantity that LoadRunner measures is throughput, in transactions per second. As the load on a system increases, response times tend to increase and throughput tends to decrease.
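In a script, the stock-lookup example above would be wrapped in a transaction along these lines. The request URL is a placeholder; `lr_start_transaction` and `lr_end_transaction` are the standard LoadRunner calls, and the fragment runs only inside the LoadRunner runtime.

```c
/* Measure the response time of retrieving a part's stock level.
   The URL and part number are placeholders. */
lr_start_transaction("stock_lookup");

web_url("stock_lookup",
        "URL=http://www.example.com/stock?part=12345",
        "Mode=HTML",
        LAST);

/* LR_AUTO: the transaction's pass/fail status is set
   automatically from the outcome of the steps inside it. */
lr_end_transaction("stock_lookup", LR_AUTO);
```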
Beyond the basic metrics of response time and throughput, the Controller provides a number of ‘monitors’, which measure a wide range of performance metrics of the system under test. The basic monitors include CPU utilisation, memory commitment and disk I/O. Other monitors are specific to the type of application under test, for example Oracle monitors and Apache monitors.
The results of each LoadRunner test are stored in a separate database. Micro Focus LoadRunner Analysis reads this database after the end of the test and analyses and displays the results, which include the response times, the throughput and the output of any monitors set up in the Controller before the test. The data can be displayed in a wide variety of ways, including graphs of varying granularity, tables and exports to Microsoft Excel or to .csv files. The data commonly analysed include the number of Vusers active in the test, the response time for each transaction, the transaction throughput over the course of the test, and key resource usage metrics for the application servers and database servers of the system under test.
When Micro Focus LoadRunner records a user’s actions as the first step of creating a script, it records the client-server network traffic and not (as most functional test tools do) the user’s GUI actions such as mouse clicks and key presses. Recording at the GUI level is unnecessary because the performance of the client itself is rarely an issue. Performance problems more commonly arise from the actions of many simultaneous users on a server. Recording at the GUI level would necessitate the replication of the GUI (in most cases a Windows environment) for each Vuser, which would consume far more resources in the Load Generator machines than only replicating the network traffic. One might expect up to three Vusers per Load Generator if simulating the GUI, compared with tens or hundreds without GUI simulation. A performance test of any given size would require far more load generators if it simulated the GUI than if it simulated only network traffic.
Automation Consultants is an Atlassian Solution Partner. You can buy HP products from us on corporate terms via purchase order and invoice, and benefit from our expert HP consultancy and integration services.