Important aspects in Load & Performance Testing – 1 – Server Warm Up
Having been introduced to the concept, implementation, purpose and effect of byte-code optimization a couple of months ago, I was naturally drawn into a short discussion with Kirk Pepperdine on the topic of server warm-up in load and performance testing.
Basically, these are things one should know in order to understand the effect of external influences on the way your application behaves. You keep hearing “server warm-up”, and you know it is a good thing to do, but you never really know why. So…why warm up?
Server warm-up relates to byte-code optimization. You cannot really speak of byte-code without speaking of compilers. And when speaking of compilers and Java, you get to the very foundation of Java, the Virtual Machine (JVM).
The greatest advantage of the JVM is its portability. To facilitate this portability, every program needs to be compiled into a standardized format, which in the case of Java is the .class format. In order to execute .class files, the JVM uses a so-called just-in-time compiler (known as JIT), a technique used in most JVMs; in the best-known implementation, Sun’s JVM, it is called HotSpot.
What the JVM basically does with this technique is to repeatedly analyze the running application for specific spots that are executed most frequently. These “hot spots” are then selected for optimization, “leading to high performance execution with minimum overhead for less performance-critical code. Some benchmarks show a 10-fold speed gain from this technique” (Wikipedia).
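You can see this effect in a plain Java program: a small, deliberately hot method, timed once “cold” and once again after a warm-up loop has given the JIT a chance to compile it. The method, the iteration counts and the class name below are illustrative assumptions, not taken from the article.

```java
// Sketch: observing JIT warm-up on a single hot method.
public class JitWarmupDemo {

    static volatile long sink; // consume results so the JIT cannot discard the work

    // A small compute kernel that HotSpot can optimize once it becomes "hot".
    static long checksum(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += (long) i * i % 7;
        }
        return sum;
    }

    static long timeRun(int n) {
        long start = System.nanoTime();
        sink = checksum(n);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeRun(1_000_000);     // first run: typically still interpreted
        for (int i = 0; i < 10_000; i++) {  // warm-up: make checksum() a hot spot
            sink = checksum(1_000);
        }
        long warm = timeRun(1_000_000);     // same work after the JIT has had a chance to compile
        System.out.println("cold: " + cold + " ns, warm: " + warm + " ns");
    }
}
```

On a typical HotSpot JVM the second timing tends to come out noticeably lower, though exact numbers vary from run to run and machine to machine.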
So, how do you warm up a server, then?
Well, when you are preparing for load and performance tests, there is no general server warm-up. You cannot “warm up” your server for every possible use of the application. The compiler cannot fully optimize your code all at once, and alongside optimization there is also de-optimization, so you should rather warm it up for the very specific functionality whose performance you want to measure.
Fact is: you warm up your system by making it go through the hot spots that YOU DEFINE, not the general ones. I mean, why would you run the “checkout” functionality for a shopping basket 100 times, if all you actually want to test is the registration? The JIT will never go through the code that is responsible for registration, hence no such code will be selected for optimization.
So, supposing you want to test the registration functionality in terms of performance, you would start by running a small load test, configuring it for 100 virtual users and repeating it for 3 iterations. This would then be called a “warm-up” test, and it is the prerequisite for increasing the load and measuring the response times.
Warm up your application server before running the load test. You do that by purposely hitting only the functionality that you want to test, forcing the JIT to go through that specific part of your code: create a baseline test using a relatively small load configuration (100 virtual users) and repeat it for a number of iterations (3 is my recommendation). Only after that do you increase the load and start measuring response times and throughput.
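The recipe above can be sketched in plain Java: three discarded warm-up iterations of 100 virtual users against only the functionality under test, followed by a measured run. The `register()` method is a hypothetical stand-in, and a real load tool (JMeter, Gatling, etc.) would drive the users concurrently, which this sketch skips for clarity.

```java
import java.util.ArrayList;
import java.util.List;

public class WarmupThenMeasure {
    static final int VIRTUAL_USERS = 100;
    static final int WARMUP_ITERATIONS = 3;

    // Stand-in for the registration functionality you actually want to measure.
    static int register(int userId) {
        return ("user-" + userId).hashCode();
    }

    // One load iteration: every virtual user calls register() once, timed.
    static List<Long> runIteration(int users) {
        List<Long> timings = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            long start = System.nanoTime();
            register(u);
            timings.add(System.nanoTime() - start);
        }
        return timings;
    }

    public static void main(String[] args) {
        // Warm-up phase: run the load, but throw the response times away.
        for (int i = 0; i < WARMUP_ITERATIONS; i++) {
            runIteration(VIRTUAL_USERS);
        }
        // Measured phase: only now do the numbers count.
        List<Long> timings = runIteration(VIRTUAL_USERS);
        long total = 0;
        for (long t : timings) {
            total += t;
        }
        System.out.println("samples: " + timings.size()
                + ", avg: " + (total / timings.size()) + " ns");
    }
}
```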