I was recently asked by a blog visitor what I do with my JMeter performance results, and how I fix the bottlenecks they reveal. I think one of my recent experiences is the best answer, and it is also a good starting point for discussing the subject.
So, the facts are: we have the performance results, the nice-looking graphs telling us lots of values. How do you tell there is a performance bottleneck based on these results, and, more importantly, how do you fix it?
I will use one of the tests I recently performed to answer this question. This test plan works with 30 virtual users, logging in, joining a particular session, and doing some things afterwards (irrelevant for the purpose of this discussion).
Let’s see the test results for this relatively small load:
Like I said, after logging in, the users join a specific session. The business case behind the Join Session request requires several DB transactions (mainly selects, collecting history information about the user joining the session).
The red part displays the following information: the 50% line, 60% line, 70% line, 80% line, and 90% line, while the last column (the grey one) displays the average number of business transactions per second.
I will repeat the definition of the 90% line so it is clear for everyone reading this post. The 90% line is the maximum response time that 90% of all business transactions receive. In other words, 90% of the users receive a response time of 10.3 seconds or better.
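If you want to recompute the 90% line yourself from raw sample times, here is a minimal sketch using the nearest-rank method (JMeter's exact percentile implementation may round slightly differently):

```python
import math

def percentile_line(times_ms, pct=90):
    """Nearest-rank percentile: the response time that pct% of all
    samples are at or below (the idea behind JMeter's 90% line)."""
    ordered = sorted(times_ms)
    # rank of the sample sitting at the pct-th percentile, 1-based
    rank = max(1, math.ceil(len(ordered) * pct / 100))
    return ordered[rank - 1]
```

For the ten sample times 100 ms, 200 ms, ..., 1000 ms this returns 900, i.e. 90% of the samples completed in 900 ms or better.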
Well, 10 seconds for a request of this kind points to a problem with the DB transactions. Therefore, the next step was enabling the EclipseLink persistence logging and doing a detailed analysis of all SQL queries run against the database in one “join session” business transaction. Once I filtered this information, I grouped the queries and ran them one by one against the database, also enabling the explain plan (Oracle DB).
This is how I found three queries running on large unindexed tables, consuming a lot of CPU time on the DB server. Creating indexes on those tables dramatically improved the response times for the “join session” transaction:
As you can see, the response time improved from 10 seconds to less than 100 milliseconds.
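The Oracle explain plan output itself is not reproduced here, but the effect of adding an index is easy to demonstrate. The sketch below uses SQLite's EXPLAIN QUERY PLAN as a stand-in for Oracle's explain plan; the table, column, and index names are invented for the example:

```python
import sqlite3

# Illustration only: SQLite stands in for Oracle here, and the
# table/column names are made up for the example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE session_history"
            " (user_id INTEGER, session_id INTEGER, joined_at TEXT)")

def plan(sql):
    """Return SQLite's query plan for a statement as one string."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM session_history WHERE user_id = 42"
plan_before = plan(query)   # reports a full scan of session_history

con.execute("CREATE INDEX idx_history_user ON session_history (user_id)")
plan_after = plan(query)    # now reports a search using idx_history_user

print(plan_before)
print(plan_after)
```

Before the index the plan reports a full scan; afterwards it reports an index search, which is the same kind of change the Oracle indexes produced for the “join session” queries.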
Well, this is just one example of how I use JMeter results to detect and solve performance problems. More complicated stuff will come. I hope this answers Nirali’s question.
Cheers and have a good day
Ok, time to get this running.
As I was saying in the first part of this series, I have chosen MySQL for importing the JMeter collected data. Since you are working with builds and releases, you will want to keep track of the builds you have tested, the types of tests you have performed (load or performance), and, of course, the actual results. Therefore, these are the tables you will need:
- inf - (id, build) – stores build information. Builds are then associated with specific tests
- testrun - (build_id, test_id) – links the tests to specific builds from the table above
- testdescription - (test_id, test_date, threads, users, rampup, scenario_name, architecture_name, detailed_description) – contains information on the date the test was run, the test configuration (number of virtual users, ramp-up period), the tested functionality (scenario_name), the architecture_name (referred to as the type of test – Load, Base, Monitoring), the detailed_description, and whatever else you would like to put in the report, based on the information collected from the test. Remember, this table is only for storing “test metadata” to be used in the test report
- testresults - (id, testrun_id, t, lt, ts, s, lb, rc, rm, tn, dt, by, hn) – this is the main component, the table storing all JMeter results. The column names have not been changed and are exactly the ones used by JMeter. We are therefore interested in the response time, the response message, the timestamp, the request name, the response code, the response size, and the hostname, in case you are running your tests on multiple agents like I do
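As a sketch, the four tables could be created like this. The column types are my assumptions (only the column names are given above), and SQLite stands in for MySQL:

```python
import sqlite3

# Sketch only: column types are assumptions, and SQLite stands in for MySQL.
ddl = """
CREATE TABLE inf (
    id    INTEGER PRIMARY KEY,
    build TEXT
);
CREATE TABLE testdescription (
    test_id              INTEGER PRIMARY KEY,
    test_date            TEXT,
    threads              INTEGER,
    users                INTEGER,
    rampup               INTEGER,
    scenario_name        TEXT,
    architecture_name    TEXT,
    detailed_description TEXT
);
CREATE TABLE testrun (
    build_id INTEGER REFERENCES inf(id),
    test_id  INTEGER REFERENCES testdescription(test_id)
);
CREATE TABLE testresults (
    id         INTEGER PRIMARY KEY,
    testrun_id INTEGER,
    t    INTEGER,   -- elapsed time (ms)
    lt   INTEGER,   -- latency (ms)
    ts   INTEGER,   -- timestamp
    s    TEXT,      -- success flag
    lb   TEXT,      -- label (request name)
    rc   TEXT,      -- response code
    rm   TEXT,      -- response message
    tn   TEXT,      -- thread name
    dt   TEXT,      -- data type
    "by" INTEGER,   -- response size in bytes (quoted: BY is an SQL keyword)
    hn   TEXT       -- hostname of the agent
);
"""
con = sqlite3.connect(":memory:")
con.executescript(ddl)
```

Note that the `by` column has to be quoted (backticks in MySQL), since BY is a reserved word in SQL.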
I think it is pretty clear how to build your indexes and which keys should be set as foreign keys. Otherwise, let me know and I will post the complete DB structure.
So basically you now have support for storing all the information you need. In the end, you will want a first report showing you:
- Date when the test was run
- Configuration of the test – Threads, Agents, Ramp-up Time, Glassfish Threads etc.
- Performance values: Total number of transactions / Request, Minimum & Maximum Response Time / Request, Average Response Time / Request, Standard Deviation, 90% line, Average Transactions / Second.
- Performance Graph showing: Transactions / Second, Average Response Time / Second
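Most of these values fall out of one aggregate query per request label. A minimal sketch, assuming a testresults table holding JMeter's lb (request name), t (elapsed ms) and ts (timestamp in ms) fields, with SQLite standing in for MySQL and invented sample rows:

```python
import math
import sqlite3

# Sketch only: SQLite stands in for MySQL, and the sample rows are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE testresults (lb TEXT, t INTEGER, ts INTEGER)")
con.executemany("INSERT INTO testresults VALUES (?, ?, ?)",
                [("join session", 100 * i, 1000 * i) for i in range(1, 11)])

def report(label):
    """Aggregate one request label into the report values."""
    count, t_min, t_max, t_avg, ts_min, ts_max = con.execute(
        "SELECT COUNT(t), MIN(t), MAX(t), AVG(t), MIN(ts), MAX(ts) "
        "FROM testresults WHERE lb = ?", (label,)).fetchone()
    times = [r[0] for r in con.execute(
        "SELECT t FROM testresults WHERE lb = ? ORDER BY t", (label,))]
    line90 = times[max(0, math.ceil(count * 0.9) - 1)]    # nearest-rank 90% line
    span_s = (ts_max - ts_min) / 1000.0                   # test duration in seconds
    tps = count / span_s if span_s > 0 else float(count)  # avg transactions/second
    return {"count": count, "min": t_min, "max": t_max,
            "avg": t_avg, "line90": line90, "tps": round(tps, 2)}
```

For the ten sample rows above, `report("join session")` yields count 10, min 100 ms, max 1000 ms, average 550 ms, a 90% line of 900 ms and roughly 1.11 transactions per second.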
This will be the first report, which we will later modify to link to specialized/filtered reports, like response times for a specific request only. Let’s just start with this report and extend it as we go along.
The next post covers what data to collect for this import, how to collect it, and, most importantly, how to import it using the Pentaho Data Integration tools (Spoon, Pan, the whole Kitchen).