I am sure most of us have read, time and again, about the number one cause of failure for testing projects. It is not complexity, it is not productivity, it is not even the client's unclear requirements… most often it is the fact that testing is still viewed as an external part of the project, one that can (but not necessarily should) be done at the end of the development phase, or skipped entirely, trusting the development team to do things right. I do not think that mentality has changed much over the years, although I must admit that the project I am currently working on follows some “out of the box” rules, where developers, product managers, testers, IT guys, DBAs and so on sit at the same table, wearing the very same hats and speaking the same language: the language of productivity, efficiency and common success.
Performance testing is no different from functional testing when it comes to where it belongs in the context and timing of the software development lifecycle. Where it truly differs is its multi-threaded, multi-user point of view: one user doing one action at a time is no longer enough for a piece of software to be called mature.
Most people do not realize the consequences of having software in production that cannot cope with a parallel world. Stakeholders (clients) are mostly interested in things they can immediately touch, see, feel and present. They cannot be blamed for that; each stakeholder has their own purposes, scope and expectations.
Unfortunately, performance usually only becomes a point of interest when it is too late, or when it is already producing a considerable cost overhead for the whole software project. Worse still, performance testing is then not only started at (or after) the end of the project, but is triggered by incidents that real clients have caused and suffered from. Only then do the questions start to be asked, and the focus shifts from “beautiful” to fancy technical words like “available, efficient, reliable, performant”.
As I said, performance testing is very similar in approach to functional software testing. Considering that most of us switched to the V-Model or Agile models some time ago, we can pretty much approach performance testing from the same point of view. At least this is the approach I would take when planning a performance test project.
Performance Component Testing
One of the best ways to keep up with development is to test along with it. Covering performance aspects at an early stage can save a drastic amount of time that would otherwise be spent dealing with problems like:
- sorting algorithms
- database relationships
- object allocation/deallocation
- object lifecycle
Let’s take a simple example. Suppose we have a synchronized method, which prevents two threads from accessing that method on the same object at the same time. One thread will call the method, work with the object, and block the second thread until the first one is finished.
Suppose you are dealing with a login method that acquires a JDBC connection to the database and runs a search on an unindexed user table. As long as the method has not returned a response, the second thread stays blocked.
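A minimal sketch of what such a method might look like (the class name, query and column names are assumptions of mine for illustration, not real code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LoginService {

    private final DataSource dataSource;

    public LoginService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Only one thread at a time can execute this method on a given instance;
    // every other caller queues up behind the intrinsic lock.
    public synchronized boolean login(String user, String password) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id FROM users WHERE username = ? AND password = ?")) {
            ps.setString(1, user);
            ps.setString(2, password);
            try (ResultSet rs = ps.executeQuery()) { // full table scan if 'username' is unindexed
                return rs.next();
            }
        }
    }
}
```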
Supposing the cumulated response time of the login method is somewhere around 5 seconds, the second thread’s response time would be… you must have worked it out by now: the response time of the first thread plus the response time of the second thread itself, about 10 seconds. And we are talking about two threads, and two threads only. See my point?
The average response time in this case would be about 7.5 seconds per request. Increasing the thread count to 3 returns the following response times:
- Thread 1: 5 seconds
- Thread 2: 10 seconds
- Thread 3: 15 seconds
Altogether this gives an average response time of 10 seconds, and the values keep growing as the load increases: with N fully serialized requests of 5 seconds each, the average climbs to 5 × (N + 1) / 2 seconds. We are talking about a login component that could slow down the user’s path through your system considerably if it is not found and dealt with at an early stage.
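You can reproduce these numbers with a toy simulation; here the 5-second database search is replaced by a simple Thread.sleep so the snippet runs on its own:

```java
// Toy simulation of the numbers above: three threads contending for one
// synchronized block whose body takes ~5 seconds. Expect roughly 5 s, 10 s
// and 15 s per thread, i.e. an average of ~10 s.
public class SerializedLoginDemo {

    private static final Object LOCK = new Object();

    private static void slowLogin() throws InterruptedException {
        synchronized (LOCK) {
            Thread.sleep(5000); // stands in for the unindexed JDBC search
        }
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            new Thread(() -> {
                long start = System.nanoTime();
                try {
                    slowLogin();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                long seconds = (System.nanoTime() - start) / 1_000_000_000L;
                System.out.println("Thread " + id + ": ~" + seconds + " s");
            }).start();
        }
    }
}
```

Run it, and the three threads report roughly 5, 10 and 15 seconds, in whatever order they happen to acquire the lock.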
Point taken! The place to start when doing performance testing in a new project is component testing. Take each of your components as they are being developed and test them, in an integrated environment, as a single piece of software. And by integrated environment I mean JVM, application server, database. As you probably already know, in a J2EE project the bottleneck can hide in so many places that testing components at a very early, isolated level can only ease the workload on your shoulders when you later deal with a real performance problem.
Looking at the classical Java layered execution model, we can already identify possible bottlenecks like:
- Hardware: Memory, Bus Speed, CPU Clock Speed, Number of CPUs, Context Switching
- Operating System: Architecture (32 vs 64 bit, stack size, socket handling, etc.)
- JVM: Heap Configuration, Garbage collection strategy
- Application Server: Thread Pool, EJB Cache, EJB Pool, JDBC Pool, HTTP Acceptor thread, JMS
- Application itself
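To pick just one of these layers as an example, the JVM’s heap configuration and garbage collection strategy are typically controlled through startup flags; the values below are purely illustrative, not recommendations:

```
# Illustrative HotSpot startup flags (values are assumptions, tune per application):
#   -Xms / -Xmx     initial and maximum heap size
#   -Xss            per-thread stack size
#   -XX:+UseG1GC    garbage collection strategy
#   -verbose:gc     log garbage collection activity
java -Xms1g -Xmx2g -Xss512k -XX:+UseG1GC -verbose:gc -jar myapp.jar
```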
You need to be able to identify the source of a problem quickly, usually taking a top-down approach. Focusing on one, and only one, component at a time allows you to isolate and identify problems at the component level before taking the next step: testing a scenario that involves the integration of two components.
One has to see the user’s work-path as a path through a series of components, e.g. login, search, add to cart, pay, confirm, logout. Any malfunction or bottleneck in one of these components affects the overall user experience.
I think one can look at it like JIT (Just in Time), slightly adapted to our purpose. We define WHAT to test, WHEN to test and HOW to test before switching to the next stage and testing a second-level component in our work-path.
Basically, what I am trying to point out here is that we need to make sure the first-level component supports a parallel environment and a user load, and can satisfy our SLAs, or our expectations in terms of response times. This way we will not be forced back into retesting the component once we run into a problem while integrating it with a component at the next level of the work-path.
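As a rough sketch of what such a component-level check could look like (the thread count, SLA threshold and callLoginComponent() placeholder are assumptions of mine), a bare-bones harness only needs to fire concurrent requests at the component and compare the measured response times against the agreed limit:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoginComponentLoadTest {

    private static final int USERS = 25;         // assumed parallel user load
    private static final long SLA_MILLIS = 2000; // assumed SLA for the login component

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(USERS);
        CountDownLatch startGate = new CountDownLatch(1);
        List<Long> times = Collections.synchronizedList(new ArrayList<>());

        for (int i = 0; i < USERS; i++) {
            pool.submit(() -> {
                startGate.await();                // release all users at once
                long start = System.nanoTime();
                callLoginComponent();             // the component under test
                times.add((System.nanoTime() - start) / 1_000_000);
                return null;
            });
        }
        startGate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        long max = times.stream().mapToLong(Long::longValue).max().orElse(0);
        double avg = times.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.printf("avg=%.0f ms, max=%d ms, SLA=%d ms%n", avg, max, SLA_MILLIS);
        if (max > SLA_MILLIS) {
            throw new AssertionError("Login component misses the SLA under " + USERS + " users");
        }
    }

    // Placeholder: in a real test this would call the deployed component,
    // e.g. LoginService.login(...)
    private static void callLoginComponent() {
    }
}
```

In practice a dedicated tool such as JMeter does the same job with far more options, but the principle stays the same: one component, a defined load, a pass/fail criterion.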